• Title/Summary/Keyword: Voice transformation


Voice Quality Criteria for Heterogenous Network Communication Under Mobile-VoIP Environments

  • Choi, Jae-Hun;Seol, Soon-Uk;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.3E
    • /
    • pp.99-108
    • /
    • 2009
  • In this paper, we suggest criteria for the objective measurement of speech quality in mobile VoIP (Voice over Internet Protocol) services over wireless mobile internet such as mobile WiMAX networks, for the case in which voice communication crosses into other networks. When mobile VoIP users on the packet-based mobile internet call PSTN and mobile-network users, there have been no relevant quality indexes or standards for evaluating mobile VoIP speech quality. In addition, many factors influence speech quality in a packet network. In particular, when speech degraded by packet loss is transferred to users on another network through a handover, voice quality deteriorates significantly due to the transformation of speech codecs. We adopt the Gilbert-Elliot channel model to characterize the packet network and assess voice quality with the objective method of ITU-T P.862.1 MOS-LQO for various call scenarios from a mobile VoIP user to PSTN and mobile-network users under various packet loss rates in the transmission channel. Our simulation results show that the transformation of speech codecs degrades speech quality across different transmission channel environments when mobile VoIP users call PSTN and mobile-network users.
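
The Gilbert-Elliot model mentioned in this abstract is a two-state Markov chain (a Good state with low loss and a Bad state with high loss) commonly used to generate bursty packet loss. A minimal sketch, with illustrative transition and loss probabilities that are not taken from the paper:

```python
import random

def gilbert_elliott_losses(n_packets, p_gb=0.05, p_bg=0.4,
                           loss_good=0.0, loss_bad=0.5, seed=0):
    """Simulate per-packet loss with a two-state (Good/Bad) Markov chain.

    p_gb: probability of moving Good -> Bad; p_bg: Bad -> Good.
    loss_good / loss_bad: loss probability within each state.
    All parameter values here are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    state = "G"
    losses = []
    for _ in range(n_packets):
        loss_p = loss_good if state == "G" else loss_bad
        losses.append(rng.random() < loss_p)
        # State transition for the next packet.
        if state == "G" and rng.random() < p_gb:
            state = "B"
        elif state == "B" and rng.random() < p_bg:
            state = "G"
    return losses

losses = gilbert_elliott_losses(10000)
print(sum(losses) / len(losses))  # average packet loss rate
```

Because losses cluster in Bad-state bursts, the trace stresses a codec differently from independent random loss at the same average rate.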

Voice Frequency Synthesis using VAW-GAN based Amplitude Scaling for Emotion Transformation

  • Kwon, Hye-Jeong;Kim, Min-Jeong;Baek, Ji-Won;Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.2
    • /
    • pp.713-725
    • /
    • 2022
  • In most cases, artificial intelligence shows no definite change in emotion, which makes it hard to demonstrate empathy in communication with humans. If frequency modification is applied to neutral emotions, or if a different emotional frequency is added to them, it is possible to develop artificial intelligence with emotions. This study proposes emotion conversion using voice frequency synthesis based on a Generative Adversarial Network (GAN). The proposed method extracts frequencies from the speech of twenty-four actors and actresses; that is, it extracts the voice features of their different emotions, preserves the linguistic features, and converts only the emotion. It then generates a frequency with a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN) to shape prosody while preserving linguistic information, which makes it possible to learn speech features in parallel. Finally, it corrects the frequency by employing amplitude scaling: with spectral conversion on a logarithmic scale, the frequency is converted in consideration of human hearing characteristics. Accordingly, the proposed technique provides emotion conversion of speech so that artificially generated voices can express emotion.
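
The final amplitude-scaling step operates on a logarithmic spectral scale, matching the roughly logarithmic loudness perception of human hearing. A minimal illustrative sketch; the scaling factor and the plain log/exp round trip are assumptions, not the paper's exact procedure:

```python
import numpy as np

def scale_log_magnitude(magnitude, alpha=1.2, eps=1e-10):
    """Scale a magnitude spectrum in the log-amplitude (dB-like) domain.

    Scaling in the log domain tracks the roughly logarithmic loudness
    perception of human hearing; alpha is an illustrative intensity
    factor, not a value from the paper.
    """
    log_mag = np.log(magnitude + eps)   # move to log-amplitude domain
    scaled = alpha * log_mag            # linear scaling in log domain
    return np.exp(scaled)               # back to linear magnitude

# Toy usage on the spectrum of a Hann window.
spectrum = np.abs(np.fft.rfft(np.hanning(256)))
boosted = scale_log_magnitude(spectrum)
```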

Voice Conversion Using Linear Multivariate Regression Model and LP-PSOLA Synthesis Method (선형다변회귀모델과 LP-PSOLA 합성방식을 이용한 음성변환)

  • 권홍석;배건성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.3
    • /
    • pp.15-23
    • /
    • 2001
  • This paper presents a voice conversion technique that modifies the utterance of a source speaker as if it were spoken by a target speaker. Feature-parameter conversion methods that transform vocal tract and prosodic characteristics between the source and target speakers are described. The transformation of vocal tract characteristics is achieved by modifying the LPC cepstral coefficients using Linear Multivariate Regression (LMR). Prosodic transformation is done by changing the average pitch period between speakers and is applied to the residual signal using the LP-PSOLA scheme. Experimental results show that speech transformed by LMR and the LP-PSOLA synthesis method retains many characteristics of the target speaker.
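
The LMR step amounts to fitting an affine map from source to target cepstral vectors over time-aligned training frames. A minimal sketch using ordinary least squares on toy data; the time alignment and LPC cepstrum extraction that the paper performs are omitted here:

```python
import numpy as np

def fit_lmr(source, target):
    """Fit target ~= [source, 1] @ W by least squares.

    source, target: (n_frames, n_coeffs) arrays of time-aligned
    cepstral coefficient vectors from the source and target speakers.
    """
    x_aug = np.hstack([source, np.ones((source.shape[0], 1))])  # append bias
    w, *_ = np.linalg.lstsq(x_aug, target, rcond=None)
    return w  # shape (n_coeffs + 1, n_coeffs)

def apply_lmr(w, frame):
    """Map one source cepstral frame to the target speaker's space."""
    return np.hstack([frame, 1.0]) @ w

# Toy data: the target is an exact affine map of the source,
# so least squares should recover the mapping almost perfectly.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 12))
true_a = rng.normal(size=(12, 12))
true_b = rng.normal(size=12)
tgt = src @ true_a + true_b
w = fit_lmr(src, tgt)
err = np.abs(apply_lmr(w, src[0]) - tgt[0]).max()
```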


A Study on Voice Web Browsing in Automatic Speech Recognition Application System (음성인식 시스템에서의 Voice Web Browsing에 관한 연구)

  • 윤재석
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.5
    • /
    • pp.949-954
    • /
    • 2003
  • In this study, an Automatic Speech Recognition Application System is designed and implemented to realize the transformation from today's GUI-centered web services to VUI-centered web services. Because ASP is limited in reusability and portability on the web, an Automatic Speech Recognition Application System with a JavaBeans component architecture is devised and studied. Voice web browsing that can transfer voice and graphic information simultaneously is also studied using Remote AWT (Abstract Window Toolkit).

Voice Personality Transformation Using a Probabilistic Method (확률적 방법을 이용한 음성 개성 변환)

  • Lee Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.3
    • /
    • pp.150-159
    • /
    • 2005
  • This paper addresses a voice personality transformation algorithm that makes one person's voice sound as if it were another person's. In the proposed method, a speaker's voice is represented by LPC cepstrum, pitch period, and speaking rate, and appropriate transformation rules are constructed for each parameter. A Gaussian Mixture Model (GMM) is used to model one speaker's LPC cepstra, and a conditional probability is used to model the relationship between the two speakers' LPC cepstra. To obtain the parameters of each probabilistic model, Maximum Likelihood (ML) estimation is employed. The transformed LPC cepstra are obtained using a Minimum Mean Square Error (MMSE) criterion. Pitch period and speaking rate are used as the parameters for prosody transformation, which is implemented using the ratio of their average values. The proposed method outperforms a previous VQ-based method in objective measures, including average cepstrum distance reduction ratio and likelihood increase ratio. In subjective tests, we obtained almost the same correct identification ratio as the previous method and also confirmed that high-quality transformed speech is obtained, owing to the smoothly evolving spectral contours over time.
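
The MMSE transformation takes the conditional expectation E[y|x] of the target cepstrum y given the source cepstrum x. For a single jointly Gaussian component this has a closed form; the full GMM method weights one such term per mixture component by its posterior probability. A sketch of the single-component case with toy numbers:

```python
import numpy as np

def mmse_transform(x, mu_x, mu_y, cov_xx, cov_yx):
    """MMSE estimate E[y | x] for jointly Gaussian (x, y):

        E[y|x] = mu_y + cov_yx @ inv(cov_xx) @ (x - mu_x)

    With a GMM, the full method computes one such term per mixture
    component and weights it by the posterior p(component | x).
    """
    return mu_y + cov_yx @ np.linalg.solve(cov_xx, x - mu_x)

# Toy usage: zero means, identity source covariance,
# cross-covariance 0.5 * I, so E[y|x] = 0.5 * x.
x = np.array([2.0, -1.0])
y_hat = mmse_transform(x, np.zeros(2), np.zeros(2),
                       np.eye(2), 0.5 * np.eye(2))
```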

Detection of Pathological Voice Using Linear Discriminant Analysis

  • Lee, Ji-Yeoun;Jeong, Sang-Bae;Choi, Hong-Shik;Hahn, Min-Soo
    • MALSORI
    • /
    • no.64
    • /
    • pp.77-88
    • /
    • 2007
  • Nowadays, mel-frequency cepstral coefficients (MFCCs) and Gaussian mixture models (GMMs) are used for pathological voice detection. This paper suggests a method to improve the performance of pathological/normal voice classification based on the MFCC-based GMM. We analyze the characteristics of the mel frequency-based filterbank energies (FBE) using the Fisher discriminant ratio (FDR), and construct feature vectors through linear discriminant analysis (LDA) transformation of the filterbank energies and the MFCCs. Accuracy is measured with a GMM classifier. This paper shows that the FBE LDA-based GMM is a sufficiently distinct method for pathological/normal voice classification, with a 96.6% classification rate. The proposed method outperforms the MFCC-based GMM, with a noticeable improvement of 54.05% in terms of error reduction.
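
The Fisher discriminant ratio used to analyze the filterbank energies scores each feature dimension by how far apart the two class means lie relative to the class variances. A minimal per-dimension sketch on synthetic data; the toy matrices and dimension count are illustrative, not the paper's features:

```python
import numpy as np

def fisher_discriminant_ratio(feat_a, feat_b):
    """Per-dimension Fisher discriminant ratio:

        FDR_d = (mu_a - mu_b)^2 / (var_a + var_b)

    feat_a, feat_b: (n_frames, n_features) feature matrices for the two
    classes (e.g. pathological vs. normal voice). Higher values mark
    dimensions that separate the classes better.
    """
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    var_a, var_b = feat_a.var(axis=0), feat_b.var(axis=0)
    return (mu_a - mu_b) ** 2 / (var_a + var_b + 1e-12)

# Toy filterbank-energy matrices: class means differ only in dimension 0,
# so dimension 0 should receive by far the highest FDR score.
rng = np.random.default_rng(1)
normal_fbe = rng.normal(0.0, 1.0, size=(500, 4))
patho_fbe = rng.normal([3.0, 0.0, 0.0, 0.0], 1.0, size=(500, 4))
fdr = fisher_discriminant_ratio(patho_fbe, normal_fbe)
```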


Real-time Voice Change System using Pitch Change (피치 변환을 사용한 실시간 음성 변환 시스템)

  • 김원구
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2004.04a
    • /
    • pp.466-469
    • /
    • 2004
  • In this paper, a real-time voice change method using pitch modification is proposed to change one person's voice into another. For this purpose, a sampling-rate change method based on the DFT (Discrete Fourier Transform) and a time-scale modification method based on SOLA (Synchronized Overlap and Add) are combined to change the pitch. Voice transformation experiments were conducted to evaluate the performance of the proposed method. Experimental results showed that the original speech signal is changed into another speech signal in which the original speaker's identity is difficult to recognize. The system is implemented on a TI TMS320C6711 DSK board to verify that it runs in real time.
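
The two building blocks described above combine as follows: resampling shifts pitch but also changes duration, and an overlap-add time-scale step restores the original duration. A simplified sketch using linear-interpolation resampling and a fixed-hop overlap-add; it omits the cross-correlation alignment that distinguishes true SOLA, so it is an approximation of the paper's method:

```python
import numpy as np

def resample(signal, factor):
    """Resample by linear interpolation. Playing the result at the
    original rate shifts pitch by `factor` and scales duration by
    1/factor."""
    n_out = int(len(signal) / factor)
    positions = np.arange(n_out) * factor
    return np.interp(positions, np.arange(len(signal)), signal)

def overlap_add_stretch(signal, rate, frame=512, hop=128):
    """Naive time-scale modification: take analysis frames at spacing
    hop*rate, lay them down at spacing hop, and overlap-add with a Hann
    window. Real SOLA additionally searches for the best
    cross-correlation alignment at each overlap."""
    window = np.hanning(frame)
    n_frames = max(1, (len(signal) - frame) // int(hop * rate))
    out = np.zeros(n_frames * hop + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        a = int(i * hop * rate)   # analysis position in the input
        s = i * hop               # synthesis position in the output
        out[s:s + frame] += signal[a:a + frame] * window
        norm[s:s + frame] += window
    return out / np.maximum(norm, 1e-8)

def pitch_shift(signal, factor):
    """Shift pitch by `factor` at (approximately) constant duration."""
    shifted = resample(signal, factor)                  # pitch up, shorter
    return overlap_add_stretch(shifted, 1.0 / factor)   # restore duration

# Toy usage: a 200 Hz tone shifted up an octave.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200 * t)
up = pitch_shift(tone, 2.0)
```

Without the alignment search, frame boundaries introduce audible phasing; the dominant frequency still lands near the target pitch.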


A Study on Voice Web Browsing in JAVA Beans Component Architecture Automatic Speech Recognition Application System. (JAVABeans Component 구조를 갖는 음성인식 시스템에서의 Voice Web Browsing에 관한 연구)

  • 장준식;윤재석
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.05a
    • /
    • pp.273-276
    • /
    • 2003
  • In this study, an Automatic Speech Recognition Application System is designed and implemented to realize the transformation from today's GUI-centered web services to VUI-centered web services. Because ASP is limited in speed and implementation on the web, an Automatic Speech Recognition Application System with a JavaBeans component architecture is devised and studied. Voice web browsing that can transfer voice and graphic information simultaneously is also studied using Remote AWT (Abstract Window Toolkit).


Trends and Implications of Digital Transformation in Vehicle Experience and Audio User Interface (차내 경험의 디지털 트랜스포메이션과 오디오 기반 인터페이스의 동향 및 시사점)

  • Kim, Kihyun;Kwon, Seong-Geun
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.2
    • /
    • pp.166-175
    • /
    • 2022
  • Digital transformation is driving many changes in daily life and industry, and the automobile industry is in a similar situation. In some cases, element technologies from areas called metaverses are being adopted, such as 3D-animated digital cockpits, around view, and voice AI. Through the growth of the mobile market, the norm of human-computer interaction (HCI) has evolved from keyboard-and-mouse interaction to the touch screen. The core area was the graphical user interface (GUI); recently, the audio user interface (AUI) has partially replaced the GUI. Since it is easy to access and intuitive for the user, it is quickly becoming common in the in-vehicle experience (IVE) in particular. The benefits of an AUI are that it frees the driver's eyes and hands, uses fewer screens, lowers interaction costs, is more emotional and personal, and is effective for people with low vision. Nevertheless, when and where to apply a GUI or an AUI are different questions, because some information is easier to process visually, while in other cases an AUI may be more suitable. This study proposes actively applying an AUI in the near future, based on the context of various driving scenes, to improve the IVE.

Real-time Voice Change System using Pitch Change (피치 변환을 사용한 실시간 음성 변환 시스템)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.6
    • /
    • pp.759-763
    • /
    • 2004
  • In this paper, a real-time voice change method using pitch modification is proposed to change one person's voice into another. For this purpose, a sampling-rate change method based on the DFT (Discrete Fourier Transform) and a time-scale modification method based on SOLA (Synchronized Overlap and Add) are combined to change the pitch. Voice transformation experiments were conducted to evaluate the performance of the proposed method. Experimental results showed that the original speech signal is changed into another speech signal in which the original speaker's identity is difficult to recognize. The system is implemented on a TI TMS320C6711 DSK board to verify that it runs in real time.