• Title/Summary/Keyword: 켑스트럼 변수 (cepstrum variables)

Search Result 12

Voice Personality Transformation Using a Probabilistic Method (확률적 방법을 이용한 음성 개성 변환)

  • Lee Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.3
    • /
    • pp.150-159
    • /
    • 2005
  • This paper addresses a voice personality transformation algorithm which makes one person's voice sound as if it were another person's. In the proposed method, a speaker's voice is represented by LPC cepstrum, pitch period, and speaking rate, and appropriate transformation rules are constructed for each parameter. A Gaussian Mixture Model (GMM) is used to model one speaker's LPC cepstra, and conditional probability is used to model the relationship between two speakers' LPC cepstra. To obtain the parameters of each probabilistic model, a Maximum Likelihood (ML) estimation method is employed. The transformed LPC cepstra are obtained under a Minimum Mean Square Error (MMSE) criterion. Pitch period and speaking rate are used as the parameters for prosody transformation, which is implemented using the ratio of their average values. The proposed method outperforms the previous VQ-based method in objective measures including average cepstrum distance reduction ratio and likelihood increase ratio. In subjective tests, we obtained almost the same correct identification ratio as the previous method and also confirmed that high-quality transformed speech is obtained, owing to the smoothly evolving spectral contours over time.
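
The GMM/MMSE mapping described in the abstract above can be illustrated in a reduced form. The sketch below uses a single joint Gaussian over source and target features instead of a full mixture, so the MMSE estimate is simply the conditional mean; the data and variable names are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

def fit_joint_gaussian(X, Y):
    """Fit one joint Gaussian over stacked source (X) and target (Y) features.
    A single-component stand-in for the paper's GMM."""
    Z = np.hstack([X, Y])
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False)
    return mu, cov

def mmse_convert(x, mu, cov, dx):
    """MMSE estimate of the target features given source features x:
    the conditional mean E[y|x] of the joint Gaussian."""
    mu_x, mu_y = mu[:dx], mu[dx:]
    Sxx = cov[:dx, :dx]          # source auto-covariance
    Syx = cov[dx:, :dx]          # target-source cross-covariance
    return mu_y + Syx @ np.linalg.solve(Sxx, x - mu_x)

# Toy demo: target features are a noisy linear map of source features.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
Y = X @ np.array([[2.0, 0.0], [0.5, 1.0]]) + 1.0 + 0.05 * rng.normal(size=(2000, 2))
mu, cov = fit_joint_gaussian(X, Y)
y_hat = mmse_convert(np.array([0.3, -0.2]), mu, cov, dx=2)
```

A full mixture would weight one such conditional mean per component by the posterior probability of that component given x.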

Voice personality transformation using an orthogonal vector space conversion (직교 벡터 공간 변환을 이용한 음성 개성 변환)

  • Lee, Ki-Seung;Park, Kun-Jong;Youn, Dae-Hee
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.96-107
    • /
    • 1996
  • A voice personality transformation algorithm using orthogonal vector space conversion is proposed in this paper. Voice personality transformation is the process of changing one person's acoustic features (source) to those of another person (target). In this paper, personality transformation is achieved by changing the LPC cepstrum coefficients, excitation spectrum, and pitch contour. An orthogonal vector space conversion technique is proposed to transform the LPC cepstrum coefficients. The LPC cepstrum transformation is implemented by principal component decomposition, applying the Karhunen-Loeve transformation and a minimum mean-square error coordinate transformation (MSECT). Additionally, we propose a pitch contour modification method to transform the prosodic characteristics of any speaker. To do this, reference pitch patterns for the source and target speakers are first built up, and the source speaker's pitch contour is mapped to the target speaker's one. The experimental results show the effectiveness of the proposed algorithm in both subjective and objective evaluations.
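
The Karhunen-Loeve (principal component) step mentioned above amounts to projecting mean-removed feature vectors onto the eigenvectors of their covariance matrix. A minimal numpy sketch, with synthetic data standing in for LPC cepstrum coefficients:

```python
import numpy as np

def klt_basis(X):
    """Karhunen-Loeve transform: eigenvectors of the data covariance,
    ordered by decreasing eigenvalue (principal components)."""
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    order = np.argsort(evals)[::-1]
    return evecs[:, order], mean

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))            # stand-in feature vectors
V, mean = klt_basis(X)
coords = (X - mean) @ V                  # coordinates in the orthogonal basis
X_rec = coords @ V.T + mean              # full basis -> exact reconstruction
```

In the paper's scheme the conversion (MSECT) would then operate on these orthogonal coordinates rather than on the raw cepstra.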


Performance Comparison by Characteristic Parameter of Speaker Identification System using Neural Networks (신경회로망을 이용한 화자식별 시스템의 특징 파라미터에 따른 성능비교)

  • 정재룡;유재훈;배현;전병희;김성신
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2002.12a
    • /
    • pp.345-348
    • /
    • 2002
  • Voice recognition technology is broadly classified into two categories: speech recognition and speaker recognition. Speech recognition is currently the more widely studied of the two, but speaker recognition is steadily growing in importance. This paper studies speaker identification, a branch of speaker recognition that identifies an arbitrary speaker; it presents feature extraction methods for a neural-network-based speaker identification system and compares their performance. In the identification stage, 78 voice samples from 26 speakers are trained using the neural network's backpropagation algorithm, and one speaker's voice sample is used for identification testing. The feature parameters used as neural network inputs were linear prediction coefficients, mel-frequency cepstral coefficients, and wavelet-based cepstral coefficients. As a result, using the mixed feature parameters as input to neural network model 2 gave the best performance, outperforming the other parameter sets by 8.46-21.53%.

Classification of muscle tension dysphonia (MTD) female speech and normal speech using cepstrum variables and random forest algorithm (켑스트럼 변수와 랜덤포레스트 알고리듬을 이용한 MTD(근긴장성 발성장애) 여성화자 음성과 정상음성 분류)

  • Yun, Joowon;Shim, Heejeong;Seong, Cheoljae
    • Phonetics and Speech Sciences
    • /
    • v.12 no.4
    • /
    • pp.91-98
    • /
    • 2020
  • This study investigated the acoustic characteristics of the sustained vowel /a/ and sentence utterances produced by patients with muscle tension dysphonia (MTD) using cepstrum-based acoustic variables. 36 women diagnosed with MTD and the same number of women with normal voice participated in the study, and the data were recorded and measured with ADSV™. The results demonstrated that cepstral peak prominence (CPP) and CPP_F0 were statistically significantly lower in the MTD group than in the control group. On the GRBAS scale, overall severity (G) was most prominent in the voice quality of MTD patients, followed in order by roughness (R), breathiness (B), and strain (S). As these characteristics increased, a statistically significant negative correlation with CPP was observed. We tried to classify the MTD and control groups using the CPP and CPP_F0 variables. As a result of statistical modeling with a Random Forest machine learning algorithm, much higher classification accuracy (100% on training data and 83.3% on test data) was found in the sentence reading task, with CPP proving to play a more crucial role in both the vowel and sentence reading tasks.
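
The classification setup described above, two cepstral predictors fed to a Random Forest, can be sketched as follows. The data here are synthetic stand-ins with assumed group means (lower CPP/CPP_F0 for the MTD group), not the study's measurements; scikit-learn's RandomForestClassifier is one common implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: columns are (CPP, CPP_F0); values are invented.
rng = np.random.default_rng(42)
n = 36                                   # per-group size, matching the study
cpp_mtd = rng.normal(loc=[6.0, 5.0], scale=1.0, size=(n, 2))   # MTD: lower CPP
cpp_ctl = rng.normal(loc=[10.0, 9.0], scale=1.0, size=(n, 2))  # controls
X = np.vstack([cpp_mtd, cpp_ctl])
y = np.array([1] * n + [0] * n)          # 1 = MTD, 0 = control

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)              # held-out classification accuracy
```

`clf.feature_importances_` would give the per-variable contribution the study uses to argue that CPP matters more than CPP_F0.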

Voice Personality Transformation Using a Multiple Response Classification and Regression Tree (다중 응답 분류회귀트리를 이용한 음성 개성 변환)

  • 이기승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3
    • /
    • pp.253-261
    • /
    • 2004
  • In this paper, a new voice personality transformation method is proposed, which modifies speaker-dependent feature variables in the speech signal. The proposed method takes cepstrum vectors and pitch as the transformation parameters, which represent the vocal tract transfer function and excitation signals, respectively. To transform these parameters, a multiple response classification and regression tree (MR-CART) is employed. MR-CART is the vector-extended version of a conventional CART, whose response is given in vector form. We evaluated the performance of the proposed method by comparing it with a previously proposed codebook mapping method, and quantitatively analyzed the transformation performance and complexity under various conditions. From the experimental results for 4 speakers, the proposed method objectively outperforms the conventional codebook mapping method, and we also observed that the transformed speech sounds closer to the target speech.

Depth Extraction of Convergent-Looking Stereo Images Based on the Human Visual System (인간시각체계에 기초한 교차시각 스테레오 영상의 깊이 추출)

  • 이적식
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.4A
    • /
    • pp.371-382
    • /
    • 2002
  • A camera model with parallel optical axes has been widely used in stereo vision applications. In this paper, a pair of input images is obtained from a convergent-looking stereo camera model based on the human visual system, and each image is divided into quadrant regions with respect to the fixation point. The reasoning behind the quadrant partitions is based on the human visual system and is proven by a geometrical method. Image patches are constructed from the right and left stereo images. A modified cepstrum filter is applied to the patches, and disparity vectors are determined by a peak detection algorithm. The three-dimensional information for synthetic images is obtained from the measured disparity and the convergent stereo camera model. The experimental results show that, for various stereo images, the proposed method is accurate around the fixation point, like the human visual system.

Laryngeal Cancer Screening using Cepstral Parameters (켑스트럼 파라미터를 이용한 후두암 검진)

  • 이원범;전경명;권순복;전계록;김수미;김형순;양병곤;조철우;왕수건
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.14 no.2
    • /
    • pp.110-116
    • /
    • 2003
  • Background and Objectives : Laryngeal cancer discrimination using voice signals is a non-invasive method that can carry out the examination rapidly and simply without causing discomfort to the patients. If appropriate analysis parameters and classifiers are developed, this method can be used effectively in various applications including telemedicine. This study examines voice analysis parameters used for laryngeal disease discrimination to help discriminate laryngeal diseases by voice signal analysis. The study also estimates the laryngeal cancer discrimination performance of a Gaussian mixture model (GMM) classifier based on statistical modelling of the voice analysis parameters. Materials and Methods : The Multi-dimensional voice program (MDVP) parameters, which have been widely used for the analysis of laryngeal cancer voice, sometimes fail to analyze the voice of a laryngeal cancer patient whose vocal cycle is seriously damaged. Accordingly, it is necessary to develop a new method that enables a highly reliable analysis of the voice signals that cannot be analyzed by the MDVP. For the laryngeal cancer discrimination experiments, the authors used three types of voices collected at the Department of Otorhinolaryngology, Pusan National University Hospital: voice data from 50 normal males, 50 males with benign laryngeal diseases, and 105 males with laryngeal cancer. In addition, the experiment also included voice data from 11 males with laryngeal cancer that cannot be analyzed by the MDVP. Only the monosyllabic vowel /a/ was used as voice data. Since there were only 11 voices of laryngeal cancer patients that cannot be analyzed by the MDVP, those voices were used only for discrimination. This study examined the linear predictive cepstral coefficients (LPCC) and the mel-frequency cepstral coefficients (MFCC), the two major cepstrum analysis methods in the area of acoustic recognition.
Results : The results showed that the mel-frequency scaling process was effective in acoustic recognition but not useful for laryngeal cancer discrimination. Accordingly, the linear frequency cepstral coefficients (LFCC), which exclude the mel-frequency scaling from the MFCC, were introduced. The LFCC showed better discrimination performance than the MFCC in predicting laryngeal cancer. Conclusion : In conclusion, the parameters applied in this study could accurately discriminate even terminal laryngeal cancer whose periodicity is disturbed. Future studies on various classification algorithms and on parameters representing the pathophysiology of the vocal cords should make it possible to discriminate benign laryngeal diseases as well, in addition to laryngeal cancer.
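
The MFCC/LFCC contrast above reduces to how the filter-bank centers are spaced: mel spacing concentrates filters at low frequencies, linear spacing does not. A small numpy sketch of the two center-frequency grids (parameter values are illustrative, not those used in the study):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def center_freqs(n_filters, f_lo, f_hi, scale="mel"):
    """Filter-bank center frequencies on a mel (MFCC) or linear (LFCC) scale."""
    if scale == "mel":
        mels = np.linspace(hz_to_mel(f_lo), hz_to_mel(f_hi), n_filters)
        return mel_to_hz(mels)
    return np.linspace(f_lo, f_hi, n_filters)

mel_c = center_freqs(20, 0.0, 8000.0, "mel")
lin_c = center_freqs(20, 0.0, 8000.0, "linear")
# Mel spacing packs more filters below ~1 kHz; linear spacing treats all
# bands equally, which is the property the LFCC exploits here.
```

Both parameter sets then apply a DCT to the log filter-bank energies to obtain cepstral coefficients.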


Speech synthesis using acoustic Doppler signal (초음파 도플러 신호를 이용한 음성 합성)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.35 no.2
    • /
    • pp.134-142
    • /
    • 2016
  • In this paper, a method of synthesizing speech signals using 40 kHz ultrasonic signals reflected from the articulatory muscles is introduced and its performance evaluated. When the ultrasound signals are radiated toward the articulating face, Doppler effects caused by movements of the lips, jaw, and chin are observed: signals whose frequencies differ from that of the transmitted signal appear in the received signal. These ADS (Acoustic-Doppler Signals) were used to estimate the speech parameters in this study. Prior to synthesizing the speech signal, a quantitative correlation analysis between ADS and speech signals was carried out on each frequency bin. The results validated the feasibility of ADS-based speech synthesis. ADS-to-speech transformation was achieved by joint Gaussian mixture model-based conversion rules. The experimental results from 5 subjects showed that filter bank energy and LPC (Linear Predictive Coefficient) cepstrum coefficients are the optimal features for ADS and speech, respectively. In the subjective evaluation, where synthesized speech signals were obtained using excitation sources extracted from the original speech signals, the ADS-to-speech conversion method yielded a 72.2 % average recognition rate.

Voice Personality Transformation Using an Optimum Classification and Transformation (최적 분류 변환을 이용한 음성 개성 변환)

  • 이기승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.5
    • /
    • pp.400-409
    • /
    • 2004
  • In this paper, a voice personality transformation method is proposed, which makes one person's voice sound like another person's. To transform the voice personality, the vocal tract transfer function is used as the transformation parameter. Compared with previous methods, the proposed method makes the transformed speech closer to the target speaker's voice from both subjective and objective points of view. Conversion between vocal tract transfer functions is implemented by classification of the entire vector space followed by a linear transformation for each cluster. LPC cepstrum is used as the feature parameter. A joint classification and transformation method is proposed, in which the optimum clusters and transformation matrices are simultaneously estimated under a minimum mean square error criterion. To evaluate the performance of the proposed method, transformation rules were generated from 150 sentences uttered by three male and one female speakers. These rules were then applied to another 150 sentences uttered by the same speakers, and objective evaluation and subjective listening tests were performed.

Change in acoustic characteristics of voice quality and speech fluency with aging (노화에 따른 음질과 구어 유창성의 음향학적 특성 변화)

  • Hee-June Park;Jin Park
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.45-51
    • /
    • 2023
  • Voice issues such as voice weakness that arise with age can have social and emotional impacts, potentially leading to feelings of isolation and depression. This study aimed to investigate the changes in acoustic characteristics resulting from aging, focusing on voice quality and spoken fluency. To this end, tasks involving sustained vowel phonation and paragraph reading were recorded for 20 elderly and 20 young participants. Voice-quality-related variables, including F0, jitter, shimmer, and Cepstral Peak Prominence (CPP) values, were analyzed along with speech-fluency-related variables, such as average syllable duration (ASD), articulation rate (AR), and speech rate (SR). The results showed that in voice quality-related measurements, F0 was higher for the elderly and voice quality was diminished, as indicated by increased jitter, shimmer, and lower CPP levels. Speech fluency analysis also demonstrated that the elderly spoke more slowly, as indicated by all ASD, AR, and SR measurements. Correlation analysis between voice quality and speech fluency showed a significant relationship between shimmer and CPP values and between ASD and SR values. This suggests that changes in spoken fluency can be identified early by measuring the variations in voice quality. This study further highlights the reciprocal relationship between voice quality and spoken fluency, emphasizing that deterioration in one can affect the other.
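
The correlation analysis reported above (e.g. between shimmer and CPP) is a standard Pearson computation; a minimal numpy sketch with hypothetical values chosen to mimic the reported negative shimmer-CPP relationship, not the study's data:

```python
import numpy as np

# Hypothetical per-speaker measurements (illustrative only):
shimmer = np.array([2.1, 3.4, 2.8, 5.0, 4.2, 3.9, 2.5, 4.8])       # percent
cpp     = np.array([14.2, 12.1, 13.0, 9.8, 10.5, 11.0, 13.5, 10.0])  # dB

r = np.corrcoef(shimmer, cpp)[0, 1]  # Pearson correlation coefficient
# A strongly negative r corresponds to the reported pattern: as shimmer
# rises with age-related voice change, CPP falls.
```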