• Title/Summary/Keyword: speech quality

The Acoustic Changes of Voice after Uvulopalatopharyngoplasty (구개인두성형술 후 음성의 음향학적 변화)

  • Hong, K.H.;Kim, S.W.;Yoon, H.W.;Cho, Y.S.;Moon, S.H.;Lee, S.H.
    • Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.23-37
    • /
    • 2001
  • The primary sound produced by the vibration of the vocal folds reaches the velopharyngeal isthmus and is directed both nasally and orally. The proportion of each component is determined by the anatomical and functional status of the soft palate. The oral sounds are composed of oral vowels and consonants, shaped by the status of the vocal tract, tongue, palate and lips. The nasal sounds are composed of nasal consonants and nasal vowels and are further modified by the status of the nasal airway, so anatomical abnormalities in the nasal cavity influence nasal sound. The measurement of the nasal sounds of speech has traditionally relied on subjective scoring by listeners. Nasal sounds are described in terms of nasality and nasalization: nasality has generally been assessed perceptually when evaluating the effects of maxillofacial procedures for cleft palate, sleep apnea, snoring and nasal disorders, whereas nasalization is considered an acoustic phenomenon. Snoring and sleep apnea are typical disorders caused by a redundant velopharynx. Sleep apnea is defined as a cessation of breathing for at least 10 seconds during sleep. Several medical and surgical methods for treating sleep apnea have been attempted. Uvulopalatopharyngoplasty (UPPP) involves removal of 1.0 to 3.0 cm of soft palate tissue together with redundant oropharyngeal mucosa and lateral tissue from the anterior and sometimes posterior faucial pillars. This procedure results in a shortened soft palate, and a possible risk following this surgery is velopharyngeal malfunction due to the shortened palate. Few researchers have systematically studied the effects of this surgery on speech production. Some changes in voice quality such as resonance (nasality), articulation, and phonation have been reported. In view of these conflicting reports, there remains some uncertainty about the speech status of patients following snoring and sleep apnea surgery.
The study was conducted in two phases: 1) acoustic analysis of oral and nasal sounds, and 2) evaluation of nasality.
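The perceptual nasality scoring discussed above has a common acoustic counterpart, the nasalance score: the fraction of total acoustic energy captured by a nasal microphone versus an oral one. A minimal sketch in Python (the two-channel setup and variable names are illustrative assumptions, not this study's protocol):

```python
import numpy as np

def nasalance(nasal, oral):
    """Nasalance score: nasal energy as a percentage of total
    (nasal + oral) energy, from two time-aligned waveforms
    captured by separate nasal and oral microphones."""
    e_nasal = np.sum(np.square(nasal, dtype=np.float64))
    e_oral = np.sum(np.square(oral, dtype=np.float64))
    return 100.0 * e_nasal / (e_nasal + e_oral)

# A purely nasal signal yields 100, a purely oral one yields 0.
```

Higher nasalance after surgery would indicate increased nasal resonance, which is one acoustic way to quantify the velopharyngeal changes the abstract describes.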

A New MPEG Reference Model for Unified Speech and Audio Coding (통합 음성/오디오 부호화를 위한 새로운 MPEG 참조 모델)

  • Song, Jeong-Ook;Oh, Hyen-O;Kang, Hong-Goo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.5
    • /
    • pp.74-80
    • /
    • 2010
  • Speech and audio codecs have been developed on the basis of different coding technologies because they target signals and applications with different characteristics. In step with the convergence of broadcasting and telecommunication systems, international standardization bodies such as 3GPP and ISO/IEC MPEG have tried to compress and transmit multimedia signals using unified codecs. MPEG recently initiated an activity to standardize USAC (Unified Speech and Audio Coding). However, the USAC RM (reference model) software has been problematic: it has a complex hierarchy, much unused source code, and poor encoder quality. To solve these problems, this paper introduces a new RM software designed with an open-source paradigm. It was presented at the MPEG meeting in April 2010, and the source code was released in June.

An Automatic Data Construction Approach for Korean Speech Command Recognition

  • Lim, Yeonsoo;Seo, Deokjin;Park, Jeong-sik;Jung, Yuchul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.12
    • /
    • pp.17-24
    • /
    • 2019
  • The biggest problem in the AI field, which has become a hot topic in recent years, is how to deal with the lack of training data. Since manual data construction takes a great deal of time and effort, it is non-trivial for an individual to build the necessary data; automatic data construction, on the other hand, must handle data quality issues. In this paper, we introduce a method to automatically extract from the web the data required to develop a Korean speech command recognizer, and to automatically select the data usable for training. In particular, we propose a modified ResNet model that shows modest performance on the automatically constructed Korean speech command data. We conducted an experiment to show the applicability of a command set from the health and daily-life domains. In a series of experiments using only automatically constructed data, the accuracy was 89.5% with ResNet15 in the health domain and 82% with ResNet8 in the daily-life domain.
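The ResNet models named above are built from residual blocks, whose defining feature is a skip connection that adds the input back to the transformed signal. A minimal numpy illustration of that connection (linear layers stand in for the paper's convolutions; this is not the exact ResNet8/ResNet15 architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Core residual connection: out = relu(x + W2 @ relu(W1 @ x)).
    The skip path lets the block default to (near-)identity, which
    makes deep stacks of such blocks easy to train."""
    return relu(x + W2 @ relu(W1 @ x))

# With zero weights the block reduces to the identity for
# non-negative inputs.
x = np.array([1.0, 2.0, 3.0])
out = residual_block(x, np.zeros((3, 3)), np.zeros((3, 3)))
```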

Voice-to-voice conversion using transformer network (Transformer 네트워크를 이용한 음성신호 변환)

  • Kim, June-Woo;Jung, Ho-Young
    • Phonetics and Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.55-63
    • /
    • 2020
  • Voice conversion can be applied to various voice processing applications. It can also play an important role in data augmentation for speech recognition. The conventional method uses the architecture of voice conversion with speech synthesis, with the Mel filter bank as the main parameter. The Mel filter bank is well suited to fast neural network computation but cannot be converted into a high-quality waveform without the aid of a vocoder, and it is not effective for obtaining data for speech recognition. In this paper, we focus on performing voice-to-voice conversion using only the raw spectrum. We propose a deep learning model based on the transformer network, which quickly learns the voice conversion properties using an attention mechanism between source and target spectral components. The experiments were performed on TIDIGITS data, a series of numbers spoken by an English speaker. The converted voices were evaluated for naturalness and similarity using the mean opinion score (MOS) obtained from 30 participants. Our final results yielded 3.52±0.22 for naturalness and 3.89±0.19 for similarity.
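The attention mechanism between source and target spectral components mentioned above is, at its core, scaled dot-product attention. A self-contained numpy sketch (shapes and the random data are illustrative; the paper's transformer adds multiple heads and learned projections on top of this):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: each query (source spectral frame)
    forms a weighted average of the values (target spectral frames)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # rows sum to 1
    return w @ V

# Illustrative shapes: 5 source frames attend over 8 target frames
# with 16 spectral dimensions.
rng = np.random.default_rng(0)
src = rng.standard_normal((5, 16))
tgt = rng.standard_normal((8, 16))
out = scaled_dot_product_attention(src, tgt, tgt)
```

Because the weights form a convex combination, each output frame stays within the range of the target frames it attends to.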

Speech Spectrum Enhancement Combined with Frequency-weighted Spectrum Shaping Filter and Wiener Filter (주파수가중 스펙트럼성형필터와 위너필터를 결합한 음성 스펙트럼 강조)

  • Choi, Jae-Seung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.10
    • /
    • pp.1867-1872
    • /
    • 2016
  • In digital signal processing, it is necessary to improve the quality of the speech signal after removing the background noise present in various real environments. When removing background noise acoustically, the key consideration is that the information most relevant to the human auditory mechanism lies mainly in the amplitude spectrum of the speech signal. This paper first introduces the characteristics of a frequency-weighted spectrum shaping filter whose primary purpose is the extraction of the amplitude spectrum of the speech signal. The paper then proposes an algorithm that combines a Wiener filter with the frequency-weighted spectrum shaping filter according to the acoustic model, after extracting the amplitude spectral information from the noisy speech signal. The spectral distortion (SD) of the proposed algorithm is experimentally improved by more than 5.28 dB compared with a conventional method.
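The Wiener filter component referred to above applies a per-frequency-bin gain derived from the estimated SNR. A minimal sketch (the spectral-subtraction SNR estimate and the gain floor are illustrative choices, not the paper's exact filter):

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=1e-3):
    """Per-bin Wiener gain G = snr / (1 + snr), with the a-priori SNR
    crudely estimated by power spectral subtraction.  The floor keeps
    noise-only bins from being zeroed, which reduces musical noise."""
    snr = np.maximum(noisy_power / np.maximum(noise_power, 1e-12) - 1.0, 0.0)
    return np.maximum(snr / (1.0 + snr), floor)

# Applied to the amplitude spectrum of one noisy frame: bins well above
# the noise floor are kept, noise-dominated bins are attenuated.
noisy_mag = np.array([2.0, 0.5, 1.0])
noise_psd = np.array([0.25, 0.25, 0.25])
enhanced_mag = wiener_gain(noisy_mag ** 2, noise_psd) * noisy_mag
```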

Mixed Noise Cancellation by Independent Vector Analysis and Frequency Band Beamforming Algorithm in 4-channel Environments (4채널 환경에서 독립벡터분석 및 주파수대역 빔형성 알고리즘에 의한 혼합잡음제거)

  • Choi, Jae-Seung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.5
    • /
    • pp.811-816
    • /
    • 2019
  • This paper first proposes a technique to separate clean speech signals from mixed noise signals by applying a frequency-band independent vector analysis algorithm to 4-channel speech source signals with noise. An improved output speech signal is then obtained by using the cross-correlation between the outputs of the frequency-domain delay-sum beamformer and the signals separated by the proposed independent vector analysis algorithm. In the experiments, the proposed algorithm improves the maximum SNR by 10.90 dB and the segmental SNR by 10.02 dB compared with the frequency-domain delay-sum beamforming algorithm, for noisy input speech at 0 dB and -5 dB SNR including white noise, respectively. These experiments show that the speech quality of the proposed algorithm is improved over the frequency-domain delay-sum beamforming algorithm.
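The frequency-domain delay-sum beamformer used as the baseline above compensates each channel's steering delay with a per-frequency phase shift and then averages the channels. A minimal sketch (frame length, sampling rate, and the zero-delay example are illustrative assumptions):

```python
import numpy as np

def delay_sum_beamform(frames, delays, fs, n_fft=512):
    """Frequency-domain delay-and-sum beamformer.
    frames: (n_channels, n_fft) time-aligned samples per channel;
    delays: per-channel steering delays in seconds.
    Each channel is phase-shifted to undo its delay, then the
    channels are averaged and transformed back to the time domain."""
    spectra = np.fft.rfft(frames, n=n_fft, axis=-1)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    steering = np.exp(-2j * np.pi * freqs[None, :] * np.asarray(delays)[:, None])
    aligned = spectra * np.conj(steering)      # undo each channel's delay
    return np.fft.irfft(aligned.mean(axis=0), n=n_fft)

# With zero steering delays the beamformer simply averages the
# channels; four identical channels reproduce the input frame.
fs = 16000
t = np.arange(512) / fs
frames = np.tile(np.sin(2 * np.pi * 1000 * t), (4, 1))
out = delay_sum_beamform(frames, [0.0, 0.0, 0.0, 0.0], fs)
```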

Determinant-based two-channel noise reduction method using speech presence probability (음성존재확률을 이용한 행렬식 기반 2채널 잡음제거기법)

  • Park, Jinuk;Hong, Jungpyo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.5
    • /
    • pp.649-655
    • /
    • 2022
  • In this paper, a determinant-based two-channel noise reduction method that utilizes the speech presence probability (SPP) is proposed. The proposed method improves on the conventional determinant-based two-channel noise reduction method in [7] by applying the SPP to the Wiener filter gain; consequently, it adaptively controls the amount of noise reduction depending on the SPP. For performance evaluation, the segmental signal-to-noise ratio (SNR), the perceptual evaluation of speech quality, the short-time objective intelligibility, and the log spectral distance were measured in simulated noisy environments covering various types of noise, reverberation, SNRs, and directions and numbers of noise sources. The experimental results showed that determinant-based methods outperform phase-difference-based methods in most cases. In particular, the proposed method achieved the best noise reduction performance while maintaining minimum speech distortion.
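The idea of applying the SPP to the Wiener filter gain can be sketched as a per-bin blend between the Wiener gain and an attenuation floor (the linear interpolation rule and the floor value here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def spp_controlled_gain(gain, spp, g_min=0.1):
    """Blend the Wiener gain with an attenuation floor according to the
    speech presence probability: where speech is likely (spp -> 1) the
    Wiener gain is kept; where it is unlikely (spp -> 0) stronger
    attenuation g_min is applied, adaptively controlling the amount of
    noise reduction per bin."""
    return spp * gain + (1.0 - spp) * g_min

# The same Wiener gain of 0.8 is kept in a speech-dominated bin
# (spp = 1) but pulled to the floor in a noise-only bin (spp = 0).
g_speech = spp_controlled_gain(0.8, 1.0)
g_noise = spp_controlled_gain(0.8, 0.0)
```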

STRUCTURED CODEWORD SEARCH FOR VECTOR QUANTIZATION (백터양자화가의 구조적 코더 찾기)

  • 우홍체
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2000.11a
    • /
    • pp.467-470
    • /
    • 2000
  • Vector quantization (VQ) is widely used in many high-quality and high-rate data compression applications such as speech coding, audio coding, image coding and video coding. When the size of a VQ codebook is large, the computational complexity of the full codeword search method is a significant problem for many applications. A number of complexity reduction algorithms have been proposed and investigated, exploiting properties such as the triangle inequality over the codebook. This paper proposes a new structured VQ search algorithm based on a multi-stage structure for finding the best codeword. Even using only two stages, a significant complexity reduction can be obtained without any loss of quality.
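A two-stage search of this kind can be sketched as follows: stage 1 compares the input only against group centroids, stage 2 does a full search within the winning group. Note the hedge in the sketch: restricting stage 2 to a single group is a plain tree-structured simplification, whereas the paper's structure avoids any quality loss.

```python
import numpy as np

def two_stage_search(codebook, groups, centroids, x):
    """Stage 1: pick the group whose centroid is closest to x.
    Stage 2: full nearest-neighbor search among that group's codewords.
    Complexity drops from |codebook| to |groups| + |group| distances."""
    g = int(np.argmin(np.sum((centroids - x) ** 2, axis=1)))
    members = groups[g]
    d = np.sum((codebook[members] - x) ** 2, axis=1)
    return members[int(np.argmin(d))]

# Illustrative 8-codeword codebook split into two groups.
codebook = np.array([[0.0], [0.2], [0.4], [0.6],
                     [5.0], [5.2], [5.4], [5.6]])
groups = [np.arange(0, 4), np.arange(4, 8)]
centroids = np.array([codebook[g].mean(axis=0) for g in groups])
best = two_stage_search(codebook, groups, centroids, np.array([5.35]))
```

Here only 2 centroid distances plus 4 in-group distances are computed instead of 8 full-codebook distances.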

Objective Measure for Estimating Subjective Voice Quality in Wireless Communication (CDMA 이동통신 시스템에서의 주관적 음질을 추정하기 위한 객관적 척도)

  • 백금란
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.06e
    • /
    • pp.297-302
    • /
    • 1998
  • In this paper, we study an objective measure capable of estimating the subjective quality of speech degraded in various ways by passing through a CDMA (Code Division Multiple Access) channel. Specifically, we performed the MOS (Mean Opinion Score) test, the most widely used subjective quality assessment method, on speech signals transmitted over a CDMA channel, and simulated an objective-measure algorithm that can estimate the MOS test results. As a result of this work, we obtained a high-performance objective quality assessment method by modifying the PSQM (Perceptual Speech Quality Measure) to suit the CDMA channel environment.

Spectral Feature Transformation for Compensation of Microphone Mismatches

  • Jeong, So-Young;Oh, Sang-Hoon;Lee, Soo-Young
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.4E
    • /
    • pp.150-154
    • /
    • 2003
  • The distortion effects of microphones have been analyzed and compensated in the mel-frequency feature domain. Unlike popular bias removal algorithms, a linear transformation of the mel-frequency spectrum is incorporated. Although a diagonal matrix transformation is sufficient for medium-quality microphones, a full-matrix transform is required for low-quality microphones with severe nonlinearity. The proposed compensation algorithms were tested on the HTIMIT database, yielding about 5 percent improvement in recognition rate over the conventional CMS algorithm.
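A full-matrix transform of the kind described can be estimated by least squares from paired frames of the same utterance recorded through both microphones. A minimal sketch (variable names and the paired-recording setup are illustrative assumptions):

```python
import numpy as np

def estimate_channel_transform(X_ref, X_low):
    """Least-squares estimate of a full matrix A and bias b such that
    A @ x_low + b approximates x_ref in the mel-spectral domain.
    X_ref, X_low: (n_frames, n_mel) arrays of paired frames from the
    reference and low-quality microphones."""
    n = X_low.shape[0]
    X_aug = np.hstack([X_low, np.ones((n, 1))])        # append bias column
    W, *_ = np.linalg.lstsq(X_aug, X_ref, rcond=None)  # (n_mel + 1, n_mel)
    return W[:-1].T, W[-1]                             # A, b

def compensate(A, b, x):
    """Map one low-quality-microphone feature vector toward the
    reference microphone's feature space."""
    return A @ x + b
```

A diagonal-only variant would constrain A to its diagonal, which is the cheaper compensation the abstract says suffices for medium-quality microphones.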