• Title/Summary/Keyword: speech quality


A study on loss combination in time and frequency for effective speech enhancement based on complex-valued spectrum (효과적인 복소 스펙트럼 기반 음성 향상을 위한 시간과 주파수 영역 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.1
    • /
    • pp.38-44
    • /
    • 2022
  • Speech enhancement is performed to improve the intelligibility and quality of noise-corrupted speech. In this paper, speech enhancement performance is compared using different loss functions in the time and frequency domains. This study proposes a combination of loss functions that exploits the advantages of each domain by considering both the details of the spectrum and the speech waveform. The Scale-Invariant Source-to-Noise Ratio (SI-SNR) is used as the time-domain loss function, and the Mean Squared Error (MSE) is used in the frequency domain, computed over the complex-valued spectrum and the magnitude spectrum. The phase loss is obtained using the sine function. Speech enhancement results are evaluated using the Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI); the resulting spectrograms are also compared. Experimental results on the TIMIT database show the highest performance when the SI-SNR and magnitude loss functions are combined.
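As a rough illustration of the loss combination this abstract describes, the following is a minimal NumPy sketch of an SI-SNR time-domain loss combined with a magnitude-spectrum MSE. The weighting `alpha` and the FFT size are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-Invariant Source-to-Noise Ratio in dB (higher is better)."""
    target = target - target.mean()
    estimate = estimate - estimate.mean()
    # Project the estimate onto the target to remove scale dependence.
    s_target = np.dot(estimate, target) / (np.dot(target, target) + eps) * target
    e_noise = estimate - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

def magnitude_loss(estimate, target, n_fft=256):
    """MSE between magnitude spectra (frequency-domain loss)."""
    E = np.abs(np.fft.rfft(estimate, n_fft))
    T = np.abs(np.fft.rfft(target, n_fft))
    return np.mean((E - T) ** 2)

def combined_loss(estimate, target, alpha=0.5):
    # Negate SI-SNR so that, as a loss, lower is better; mix with magnitude MSE.
    return -alpha * si_snr(estimate, target) + (1 - alpha) * magnitude_loss(estimate, target)
```

A scaled copy of the target scores near-perfectly under SI-SNR, which is the point of the scale-invariant projection.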

A Study of Acoustic Masking Effect from Formant Enhancement in Digital Hearing Aid (디지털 보청기에서의 포먼트 강조에 의한 마스킹 효과 연구)

  • Jeon, Yu-Yong;Kil, Se-Kee;Yoon, Kwang-Sub;Lee, Sang-Min
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.45 no.5
    • /
    • pp.13-20
    • /
    • 2008
  • Although digital hearing aid algorithms have been developed to compensate for hearing loss and help hearing-impaired people communicate with others, digital hearing aid users still complain of difficulty understanding speech. One reason may be that the quality of speech through a digital hearing aid is insufficient because of feedback, residual noise, and so on. Another is the masking effect among formants, which lowers sound quality. In this study, we measured the masking characteristics of normal listeners and hearing-impaired listeners with presbycusis to confirm the masking effect within speech itself. The experiment comprised five tests: a pure-tone test, a speech reception threshold (SRT) test, a word recognition score (WRS) test, a pure-tone masking test, and a speech masking test. In the speech masking test, each speech set contained 25 utterances, and the log likelihood ratio (LLR) was introduced to evaluate the distortion of each utterance objectively. As a result, speech perception declined as the amount of formant enhancement increased. The enhanced utterances within a speech set had statistically similar LLRs, but their speech perception did not match, which indicates that the acoustic masking effect, rather than distortion, influences speech perception. In fact, frequency analysis of the utterances that listeners could not answer correctly showed a level difference of about 35 dB between the first and second formants, similar to the pure-tone masking results (normal-hearing subjects: 36.36 dB; hearing-impaired subjects: 32.86 dB). The masking characteristics of normal and hearing-impaired listeners differed, so the characteristics of the masking effect should be checked before a hearing aid is worn and applied to its fitting.

A Study on TSIUVC Approximate-Synthesis Method using Least Mean Square and Frequency Division (주파수 분할 및 최소 자승법을 이용한 TSIUVC 근사합성법에 관한 연구)

  • 이시우
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.3
    • /
    • pp.462-468
    • /
    • 2003
  • In a speech coding system that uses voiced and unvoiced excitation sources, speech quality is distorted when voiced and unvoiced consonants coexist within a frame. I therefore propose a method for searching and extracting the TSIUVC (Transition Segment Including Unvoiced Consonant) so that voiced and unvoiced consonants do not coexist in a frame. This paper presents a new TSIUVC approximate-synthesis method using the least mean square criterion and frequency band division. As a result, the method obtains high-quality approximately synthesized waveforms within the TSIUVC using frequency information below 0.547 kHz and above 2.813 kHz. Notably, even the maximum error signal yields a low-distortion approximately synthesized waveform within the TSIUVC. The method can be applied to a new Voiced/Silence/TSIUVC speech coder, as well as to speech analysis and speech synthesis.
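The band-division idea above, keeping only the components below 0.547 kHz and above 2.813 kHz, can be sketched as a simple FFT band mask. This is an illustrative reconstruction under those reported band edges, not the paper's LMS-based synthesis.

```python
import numpy as np

def band_split_resynthesis(x, fs, low_cut=547.0, high_cut=2813.0):
    """Keep only the low band (< low_cut Hz) and high band (> high_cut Hz)
    of a signal and resynthesize it, zeroing the middle band."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs < low_cut) | (freqs > high_cut)
    return np.fft.irfft(X * mask, n=len(x))
```

A tone inside the discarded middle band is removed, while a tone in the retained low band passes through unchanged.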


On a Multiband Nonuniform Sampling Technique with a Gaussian Noise Codebook for Speech Coding (가우시안 코드북을 갖는 다중대역 비균일 음성 표본화법)

  • Chung, Hyung-Goue;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.6
    • /
    • pp.110-114
    • /
    • 1997
  • When nonuniform sampling is applied to noisy speech, the required data rate increases to be comparable to or greater than that of uniform sampling such as PCM. To solve this problem, we previously proposed multiband nonuniform waveform coding (MNWC), which applies nonuniform sampling to band-separated speech [7]. However, its speech quality is degraded compared with the uniform sampling method, because the high band is simply modeled as Gaussian noise at an average level. In this paper, to overcome this drawback, the high band is instead modeled as one of 16 codewords with different center frequencies. In this way, while maintaining high speech quality with an average MOS score of 3.16, the proposed method achieves a compression ratio 1.5 times higher than the conventional nonuniform sampling method (CNSM).
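One plausible reading of the codeword selection is to pick, among 16 center frequencies, the one nearest the dominant spectral peak of the high band. The center frequencies below are hypothetical; the paper does not list them.

```python
import numpy as np

def select_codeword(highband, fs, centers):
    """Pick the codeword whose center frequency is closest to the
    dominant spectral peak of the high-band signal."""
    spec = np.abs(np.fft.rfft(highband))
    freqs = np.fft.rfftfreq(len(highband), 1.0 / fs)
    peak = freqs[np.argmax(spec)]
    return int(np.argmin(np.abs(np.asarray(centers) - peak)))
```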


Quality Assessment of Telephone Speech with ATM Circuit Emulation Services (ATM 망을 통한 Circuit Emulation 서비스에서 전화음성의 품질평가)

  • Cho, Young-Soon;Seo, Jeong-Wook;Bae, Keun-Sung
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.6
    • /
    • pp.156-163
    • /
    • 1998
  • The ATM network provides ATM CES (Circuit Emulation Services) with AAL1 for CBR (constant bit rate) services such as telephone speech. In this study, the quality of telephone speech carried with CES over ATM was assessed and discussed. For this purpose, interoperability between the ATM network and structured/unstructured DS1 links was modeled for simulation, and SNR and MOS were used as objective and subjective quality measures, respectively. Experimental results show that an MOS score of 4 as well as an SNR of 30 dB can be obtained for the speech signal at a CLR of $10^{-3}$ or below.
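The objective measure used here, SNR in dB, can be sketched as below; the MOS test is subjective and not reproducible in code.

```python
import numpy as np

def snr_db(reference, degraded):
    """Global SNR of a degraded speech signal in dB, relative to a clean reference."""
    noise = reference - degraded
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))
```

For example, a constant signal with a constant offset one-tenth its amplitude yields exactly 20 dB.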


A single-channel speech enhancement method based on restoration of both spectral amplitudes and phases for push-to-talk communication (Push-to-talk 통신을 위한 진폭 및 위상 복원 기반의 단일 채널 음성 향상 방식)

  • Cho, Hye-Seung;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.36 no.1
    • /
    • pp.64-69
    • /
    • 2017
  • In this paper, we propose a single-channel speech enhancement method for PTT (Push-To-Talk) communication based on the restoration of both spectral amplitudes and phases. Unlike other single-channel speech enhancement methods that use only spectral amplitudes, the proposed method combines amplitude and phase enhancement to provide high-quality speech. We carried out side-by-side comparison experiments in various non-stationary noise environments to evaluate the performance of the proposed method. The experimental results show that the proposed method provides higher-quality speech than the other methods under the different noise conditions.

Discussions on Auditory-Perceptual Evaluation Performed in Patients With Voice Disorders (음성장애 환자에서 시행되는 청지각적 평가에 대한 논의)

  • Lee, Seung Jin
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.32 no.3
    • /
    • pp.109-117
    • /
    • 2021
  • The auditory-perceptual evaluation performed by speech-language pathologists (SLPs) in patients with voice disorders is often regarded as a touchstone among multi-dimensional voice evaluation procedures and provides important information unavailable from other assessment modalities. SLPs therefore need to conduct a comprehensive and in-depth evaluation not only of the voice but of the overall speech production mechanism, and they often encounter various difficulties in the evaluation process. In addition, SLPs should strive to avoid bias during evaluation and to maintain a wide and consistent severity spectrum for each voice quality parameter. Lastly, it is very important for SLPs to take a team approach, documenting and delivering important auditory-perceptual information appropriately and efficiently through close communication with laryngologists.

A Study on the Robust Bimodal Speech-recognition System in Noisy Environments (잡음 환경에 강인한 이중모드 음성인식 시스템에 관한 연구)

  • 이철우;고인선;계영철
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.1
    • /
    • pp.28-34
    • /
    • 2003
  • Recent research has focused on jointly using lip motion (i.e., visual speech) and audio for reliable speech recognition in noisy environments. This paper deals with combining the result of a visual speech recognizer and that of a conventional speech recognizer by putting weights on each result; in particular, it proposes a method of determining the proper weights autonomously, depending on the amount of noise in the speech and on the image quality. Simulation results show that combining audio and visual recognition with the proposed method achieves 84% recognition even in severely noisy environments. It is also shown that, in the presence of image blur, the newly proposed weighting method, which takes the blur into account as well, yields better performance than the other methods.
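A toy sketch of weighted fusion of audio and visual recognizer scores follows. The specific weighting heuristics, audio weight growing with estimated SNR and visual weight shrinking with image blur, are illustrative assumptions, not the paper's trained rule.

```python
def fuse_scores(audio_scores, visual_scores, audio_snr_db, blur):
    """Fuse per-class scores from an audio and a visual recognizer.
    Heuristic: trust audio more at high SNR, trust the visual channel
    less as image blur (0 = sharp, 1 = fully blurred) increases."""
    w_audio = min(max(audio_snr_db / 30.0, 0.0), 1.0)
    w_visual = (1.0 - w_audio) * (1.0 - min(max(blur, 0.0), 1.0))
    total = (w_audio + w_visual) or 1.0  # avoid division by zero
    w_audio, w_visual = w_audio / total, w_visual / total
    return {c: w_audio * audio_scores[c] + w_visual * visual_scores[c]
            for c in audio_scores}
```

With clean audio the fused decision follows the audio recognizer; with very noisy audio and a sharp image it follows the visual one.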

A Study on Implementation of Emotional Speech Synthesis System using Variable Prosody Model (가변 운율 모델링을 이용한 고음질 감정 음성합성기 구현에 관한 연구)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.8
    • /
    • pp.3992-3998
    • /
    • 2013
  • This paper describes a method of adding an emotional speech corpus to a high-quality, large-corpus-based speech synthesizer in order to generate varied synthesized speech. We built the emotional speech corpus in a form usable by a waveform-concatenation synthesizer and implemented a synthesizer that generates varied speech through the same unit-selection process as a normal synthesizer, using a markup language for the emotional input text. Emotional speech is generated when the input text matches an intonation phrase in the emotional speech corpus; otherwise, normal speech is generated. Because the BIs (Break Indices) of emotional speech are more irregular than those of normal speech, the BIs generated by the synthesizer are difficult to use as they are. To solve this problem, we applied Variable Break [3] modeling. Experiments with a Japanese speech synthesizer show that natural emotional synthesized speech is obtained using the break prediction module of the normal synthesizer.

A Study of the SPR (Singing Power Ratio) on the Singing Voice in Singing Students (성악 전공 학생의 가칭 시 음성의 SPR(Singing Power Ratio)에 관한 연구)

  • Jo, Sung-Mi;Jeong, Ok-Ran;Lee, Sang-Ouk
    • Speech Sciences
    • /
    • v.11 no.4
    • /
    • pp.121-127
    • /
    • 2004
  • This study attempted a spectrum analysis for quantitative evaluation of the singing voice quality of singing students, rather than relying on the presence or absence of the singer's formant. Regression analysis was used to relate ringing quality, SPR, and SPP in the singing voices of 41 college students majoring in music. Digital audio recordings of sung vowels were made for acoustic analysis, and each sample was judged by one experienced singing teacher and four voice pathologists on a semantic bipolar 7-point (ringing-dull) scale. The results showed that SPR and SPP had significant correlations with ringing quality; in particular, the singing power ratio (SPR) had a significant relationship with ringing quality in the students' singing voices. The SPR can thus be an important quantitative measure for evaluating singing voice quality.
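SPR is commonly computed as the level difference, in dB, between the strongest spectral peak in the 2-4 kHz band and the strongest peak in the 0-2 kHz band of a sung vowel. The sketch below follows that common definition; the band edges and windowing are assumptions, not details from this paper.

```python
import numpy as np

def singing_power_ratio(x, fs):
    """SPR: level difference (dB) between the strongest spectral peak
    in 2-4 kHz and the strongest peak in 0-2 kHz of a sung vowel."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    low = spec[(freqs >= 0) & (freqs < 2000)].max()
    high = spec[(freqs >= 2000) & (freqs <= 4000)].max()
    return 20 * np.log10(high / low)
```

A strong singer's formant region pushes SPR toward 0 dB (a "ringing" voice); weak high-band energy gives a large negative SPR (a "dull" voice).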
