• Title/Abstract/Keywords: speech source


LF 모델에 고조파 성분을 보상한 음원 모델링 (Voice Source Modeling Using Harmonic Compensated LF Model)

  • 이건웅;김태우;홍재근
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 1998년도 추계종합학술대회 논문집
    • /
    • pp.1247-1250
    • /
    • 1998
  • In speech synthesis, the LF model is widely used to generate the excitation signal in voice source coding systems. However, the LF model does not represent the harmonic frequencies of the excitation signal. We propose an effective method that uses sinusoidal functions to represent the harmonics of the voice source signal. The proposed method achieves a more accurate voice source waveform and better synthesized speech quality than the LF model.

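
The abstract above describes compensating an LF-style source with sinusoids at the harmonic frequencies. A minimal sketch of that idea follows; the pulse shape (a Rosenberg-like half-cosine standing in for the LF model) and the compensation amplitudes are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def harmonic_compensated_source(f0, fs, dur, n_harm=5):
    """Crude glottal pulse train plus sinusoidal harmonic terms.

    The sinusoids stand in for harmonic components that a plain
    LF-style pulse model underrepresents (amplitudes illustrative)."""
    t = np.arange(int(dur * fs)) / fs
    # Rosenberg-like opening-phase pulse as a stand-in for the LF model
    phase = (t * f0) % 1.0
    pulse = np.where(phase < 0.4, 0.5 * (1 - np.cos(np.pi * phase / 0.4)), 0.0)
    # Sinusoidal compensation at the first n_harm harmonics of f0
    comp = sum(0.1 / (k + 1) * np.sin(2 * np.pi * (k + 1) * f0 * t)
               for k in range(n_harm))
    return pulse + comp

src = harmonic_compensated_source(f0=100.0, fs=8000, dur=0.05)
```

In a full coder, the compensation amplitudes would be estimated per frame from the residual between the measured source spectrum and the model spectrum.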

LPC Vocoder 의 Excitation Source 개선에 관한 연구 (An Enhanced Excitation Source in LPC Vocoder)

  • 전지하;이근영
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 1987년도 전기.전자공학 학술대회 논문집(II)
    • /
    • pp.881-883
    • /
    • 1987
  • This paper describes a new technique for generating excitation sources in an LPC system. We synthesize a speech signal using several excitation sources, selected according to the residual signal energy and the zero-crossing rate (ZCR). One of the excitation sources mixes a doubly differentiated glottal waveform source with a noise source. As a result, we obtained an improved speech signal compared with that produced by a conventional LPC system.

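
The energy/ZCR-driven selection of excitation sources described above can be sketched as follows. The thresholds, the simple impulse train (standing in for the differentiated glottal waveform), and the 0.8/0.2 mixing weights are illustrative assumptions, not the paper's values.

```python
import numpy as np

def zcr(frame):
    """Zero-crossing rate of one frame (fraction of sign changes)."""
    return np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))

def choose_excitation(residual, fs, f0=100.0, energy_thr=0.01, zcr_thr=0.3):
    """Pick a per-frame excitation from residual energy and ZCR.

    High energy + low ZCR -> periodic pulses mixed with some noise
    (voiced); otherwise pure noise (unvoiced)."""
    rng = np.random.default_rng(0)
    n = len(residual)
    if np.mean(residual ** 2) > energy_thr and zcr(residual) < zcr_thr:
        exc = np.zeros(n)
        exc[:: int(fs / f0)] = 1.0   # impulse train at the pitch period
        return 0.8 * exc + 0.2 * rng.standard_normal(n)   # pulse + noise mix
    return rng.standard_normal(n)    # unvoiced: noise excitation

fs = 8000
t = np.arange(160) / fs
voiced = np.sin(2 * np.pi * 100 * t)   # strong, low-ZCR frame
exc_v = choose_excitation(voiced, fs)
```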

L2 proficiency and effect of auditory source in processing L2 stops

  • Kong, Eun Jong;Kang, Jieun
    • 말소리와 음성과학
    • /
    • Vol. 7, No. 3
    • /
    • pp.99-105
    • /
    • 2015
  • The current study investigates whether Korean-speaking adults show differential sensitivity to the source of auditory stimuli (L1 Korean and L2 English) when utilizing VOT and f0 in the perception of L2 stops, and how L2 proficiency interacts with the learners' low-level phonetic sensitivities in the L2 perceptual mode. Forty-eight Korean learners of English participated in perception experiments in which they rated the goodness of English /t/ and /d/ on an analogue scale. Two sets of stimuli (English and Korean sources) were prepared by manipulating the VOT (6 steps) and f0 (5 steps) values of productions by an English male speaker (L2 source condition) and a Korean male speaker (L1 source condition). The findings showed that, in judging /t/-likeness, the listeners responded differently to the two auditory stimulus conditions, relying on VOT significantly more in the English source condition than in the Korean source condition. The listeners' English proficiency did not interact with these differential sensitivities along either the VOT or the f0 dimension. The results suggest that low-level contextual information about the auditory source can affect how faithfully learners remain in the L2 perceptual mode.

주파수 특성 기저벡터 학습을 통한 특정화자 음성 복원 (Target Speaker Speech Restoration via Spectral bases Learning)

  • 박선호;유지호;최승진
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 36, No. 3
    • /
    • pp.179-186
    • /
    • 2009
  • In this paper, we propose a target-speaker speech restoration algorithm that uses stereo microphones in a real environment with noise and reverberation, given training utterances of the target speaker. To this end, we combine convolutive blind source separation (CBSS), which separates sources in a reverberant environment, with a post-processing method, yielding a system that removes noise and reverberation from noisy multipath signals and restores only the target speaker's speech. Specifically, using non-negative matrix factorization (NMF), we learn basis vectors that preserve the spectral characteristics of the target speaker from the training speech, and propose a two-stage post-processing scheme based on these basis vectors. First, CBSS, the intermediate stage of the system, takes the multipath signals as input and outputs independent sources (two channels), and the channel closer to the target speaker's speech is selected automatically (channel selection stage). Then, the noise and other interference sources remaining in the selected channel are removed, finally restoring the target speaker's speech free of noise and reverberation (restoration stage). Since both post-processing stages operate with basis vectors learned from the target speaker's speech, the speaker's characteristic spectral information can be exploited efficiently for restoration. This paper thus presents a way to incorporate prior information about the source into CBSS, improving the separation results of conventional CBSS while restoring only the target speaker's speech. Experiments confirm that the proposed method successfully restores the target speaker's speech in noisy and reverberant environments.
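
The spectral basis vectors at the core of this system can be learned with standard multiplicative-update NMF on a magnitude spectrogram. A minimal sketch under that assumption (the paper's CBSS stage and the two post-processing stages are not modeled here):

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0):
    """Multiplicative-update NMF: V (F x T) ~= W (F x r) @ H (r x T).

    Applied to a training spectrogram of the target speaker, the columns
    of W act as speaker-specific spectral basis vectors."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 1e-6
    H = rng.random((r, V.shape[1])) + 1e-6
    for _ in range(n_iter):
        # Frobenius-norm multiplicative updates keep W, H non-negative
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

At restoration time, the learned W would be held fixed and only activations H fitted to the separated channel, which is what lets the speaker's spectral signature guide the post-processing.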

녹음 환경의 차이에 따른 화자의 음원 특성 비교: 발성유형지수 k를 중심으로 (Comparison of Speaker's Source Characteristics in Different Recording Environments by Using Phonation Type Index k)

  • 이후동;강선미;박한상;장문수
    • 음성과학
    • /
    • Vol. 10, No. 3
    • /
    • pp.213-224
    • /
    • 2003
  • Spoken sound includes not only the speaker's source but also the characteristics of the vocal tract and speech radiation. This paper is based on the theory of Park [1], who proposes the Phonation Type Index (PTI) k, a variable that captures the characteristics of the speaker's source while excluding those of the vocal tract and speech radiation. Following Park's theory, we collect data under different recording environments with an expanded data set, and analyze the collected data to see whether PTI k shows good discriminating power as a variable for speaker recognition. In the experiment, we repeatedly record 8 sentences ten times for each of 5 male speakers in a recording room and in an office, extract PTI k for each speaker, and measure the discriminating power using the PTI k values. The results show that PTI k discriminates speakers very well, and that it yields similar results even when the recording environment changes.


가려진 마이크로폰을 이용한 음원 위치 추적 (Sound Source Localization using Acoustically Shadowed Microphones)

  • 이협우;육동석
    • 음성과학
    • /
    • Vol. 15, No. 3
    • /
    • pp.17-28
    • /
    • 2008
  • In many practical applications of robots, locating an incoming sound is an important issue for developing an efficient human-robot interface. Most sound source localization algorithms use only those microphones that are acoustically visible from the sound source, or do not take the effect of sound diffraction into account, thereby degrading localization performance. This paper proposes a new sound source localization method that can also utilize microphones that are acoustically shadowed from the sound source. The experimental results show that using the acoustically shadowed microphones, which receive higher signal-to-noise-ratio signals than the other microphones and are closer to the sound source, improves sound source localization performance.

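
A standard building block behind microphone-pair localization of this kind is time-delay estimation with GCC-PHAT. The sketch below shows only that generic step; the paper's specific treatment of acoustically shadowed microphones and diffraction is not modeled.

```python
import numpy as np

def gcc_phat(x, y, fs):
    """Estimate the delay of y relative to x (seconds) via GCC-PHAT.

    PHAT weighting whitens the cross-spectrum so the correlation peak
    stays sharp under reverberation."""
    n = len(x) + len(y)                    # zero-pad for linear correlation
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    R = Y * np.conj(X)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

Pairwise delays like this, combined with the array geometry, give the direction of arrival; the paper's contribution is in which microphones' delays are trusted.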

음성의 준주기적 현상 분석 및 구현에 관한 연구 (Analysis and synthesis of pseudo-periodicity on voice using source model approach)

  • 조철우
    • 말소리와 음성과학
    • /
    • Vol. 8, No. 4
    • /
    • pp.89-95
    • /
    • 2016
  • The purpose of this work is to analyze and synthesize the pseudo-periodicity of voice using a source model. A speech signal has periodic characteristics, but it is not completely periodic. While periodicity contributes significantly to the production of prosody, emotional status, etc., pseudo-periodicity contributes to the distinction between normal and abnormal status, the naturalness of normal speech, etc. Pseudo-periodicity is typically measured through parameters such as jitter and shimmer. When studying the pseudo-periodic nature of voice from collected natural voice, we can only observe the distributions of these parameters, which are limited by the size of the collected data; if voice samples can be generated in a controlled manner, more diverse experiments become possible. In this study, the probability distributions of vowel pitch variation are obtained from the speech signal. Based on the probability distribution of vocal fold pulses, pulse trains with a designated jitter value are synthesized, and the target and re-analyzed jitter values are compared to check the validity of the method. The jitter synthesis method was found to be useful for normal voice synthesis.
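
The synthesize-then-remeasure loop described above can be sketched as follows. A Gaussian perturbation stands in for the distribution the paper fits from real vowels, and the jitter measure used is the common local-jitter definition; both are assumptions for illustration.

```python
import numpy as np

def jittered_pulse_train(f0, fs, n_periods, jitter_pct, seed=0):
    """Impulse train whose period is perturbed cycle by cycle.

    jitter_pct loosely sets the size of the Gaussian period
    perturbation (percent of the nominal period)."""
    rng = np.random.default_rng(seed)
    T0 = fs / f0                               # nominal period in samples
    periods = T0 * (1 + jitter_pct / 100.0 * rng.standard_normal(n_periods))
    onsets = np.cumsum(periods).astype(int)
    sig = np.zeros(onsets[-1] + 1)
    sig[onsets] = 1.0                          # one pulse per glottal cycle
    return sig, periods

def measure_jitter(periods):
    """Local jitter (%): mean absolute period difference / mean period."""
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

sig, periods = jittered_pulse_train(100.0, 16000, 200, jitter_pct=1.0)
```

Comparing `measure_jitter(periods)` against the 1% target is the validity check the abstract describes, applied here to the synthetic periods rather than to a re-analyzed waveform.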

MPE-LPC음성합성에서 Maximum- Likelihood Estimation에 의한 Multi-Pulse의 크기와 위치 추정 (Multi-Pulse Amplitude and Location Estimation by Maximum-Likelihood Estimation in MPE-LPC Speech Synthesis)

  • 이기용;최홍섭;안수길
    • 대한전자공학회논문지
    • /
    • Vol. 26, No. 9
    • /
    • pp.1436-1443
    • /
    • 1989
  • In this paper, we propose a maximum-likelihood estimation (MLE) method to obtain the locations and amplitudes of the pulses in MPE (multi-pulse excitation)-LPC speech synthesis, which uses multi-pulses as the excitation source. The MLE method computes the values maximizing the likelihood function with respect to the unknown parameters (amplitudes and positions of the pulses) for the observed data sequence. Thus, in the case of overlapped pulses, the method is equivalent to Ozawa's cross-correlation method, with equal computation and sound quality. We show by computer simulation that the multi-pulses obtained by the MLE method are (1) pseudo-periodic in pitch for voiced sounds, (2) random for unvoiced sounds, and (3) change from random to periodic in intervals where the original speech changes from unvoiced to voiced. The short-time power spectra of the original speech and of the speech synthesized using the multi-pulses as the excitation source are quite similar to each other at the formants.

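
The cross-correlation method the abstract compares against can be sketched as a greedy search: each step places one pulse where the synthesis filter's impulse response best matches the remaining error. This is a generic sketch of that baseline, not the paper's MLE derivation; the impulse response and pulse count are illustrative.

```python
import numpy as np

def multipulse_search(target, h, n_pulses):
    """Greedy cross-correlation search for multi-pulse excitation.

    target: (weighted) speech segment to approximate
    h: impulse response of the LPC synthesis filter
    Each step picks the position whose scaled copy of h most reduces
    the residual error, then subtracts its contribution."""
    res = target.astype(float).copy()
    hh = np.dot(h, h)
    positions, amps = [], []
    for _ in range(n_pulses):
        # Correlate the residual with h at every candidate position
        corr = np.correlate(res, h, mode="full")[len(h) - 1:]
        pos = int(np.argmax(np.abs(corr)))
        amp = corr[pos] / hh                 # optimal amplitude at pos
        end = min(len(res), pos + len(h))
        res[pos:end] -= amp * h[: end - pos] # remove this pulse's effect
        positions.append(pos)
        amps.append(amp)
    return positions, amps
```

A joint MLE over all pulses re-solves the amplitudes together instead of one at a time, which matters when pulses overlap through h.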

마이크로폰 배열에서 독립벡터분석 기법을 이용한 잡음음성의 음질 개선 (Microphone Array Based Speech Enhancement Using Independent Vector Analysis)

  • 왕씽양;전성일;배건성
    • 말소리와 음성과학
    • /
    • Vol. 4, No. 4
    • /
    • pp.87-92
    • /
    • 2012
  • Speech enhancement aims to improve speech quality by removing background noise from noisy speech. Independent vector analysis (IVA) is a frequency-domain independent component analysis method known to be free from the frequency-bin permutation problem in blind source separation from multi-channel inputs. This paper proposes a new microphone-array-based speech enhancement method that combines independent vector analysis and beamforming. Independent vector analysis is used to separate speech and noise components from multi-channel noisy speech, and delay-sum beamforming is used to identify the enhanced speech among the separated signals. To verify the effectiveness of the proposed method, experiments were carried out on computer-simulated multi-channel noisy speech at various signal-to-noise ratios, with PESQ and output signal-to-noise ratio as objective speech quality measures. The experimental results show that the proposed method is superior in speech enhancement to conventional microphone-array noise removal approaches such as GSC beamforming.
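
The delay-sum stage used above to pick out the speech among the IVA outputs can be sketched in its simplest time-domain form. The integer sample delays are assumed known here, whereas in the paper they follow from the look direction; the IVA separation itself is not modeled.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Time-domain delay-and-sum beamformer.

    channels: equal-length 1-D arrays (one per microphone)
    delays: integer sample delays aligning each channel to the
    look direction; aligned channels add coherently, noise does not."""
    out = np.zeros(len(channels[0]))
    for ch, d in zip(channels, delays):
        out += np.roll(ch, -d)      # advance each channel by its delay
    return out / len(channels)
```

Scoring each IVA output with such a beamformer (steered at the speaker) and keeping the strongest response is one way to realize the "determine the enhanced speech" step.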

효과적인 복소 스펙트럼 기반 음성 향상을 위한 시간과 주파수 영역 손실함수 조합에 관한 연구 (A study on loss combination in time and frequency for effective speech enhancement based on complex-valued spectrum)

  • 정재희;김우일
    • 한국음향학회지
    • /
    • Vol. 41, No. 1
    • /
    • pp.38-44
    • /
    • 2022
  • Speech enhancement is performed to improve the intelligibility and quality of noise-corrupted speech. In this study, for mask-based speech enhancement using complex-valued spectra, we compared training results obtained with time-domain and frequency-domain loss functions. We investigated loss-function combinations that exploit the advantages of both domains by considering the speech waveform in the time domain and the spectral detail in the frequency domain. The time-domain loss was computed with the Scale Invariant-Source to Noise Ratio (SI-SNR); the frequency-domain losses were computed as the Mean Squared Error (MSE) of the complex-valued spectrum and of the magnitude spectrum, and a phase loss was computed using a sine function. The combinations paired the time-domain SI-SNR loss with each frequency-domain loss; to account for both magnitude and phase, SI-SNR was also combined with the magnitude-spectrum and phase-related losses. Enhancement performance was evaluated with the Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI), and the results were also compared on spectrograms. In experiments on the TIMIT database, training with the combination of SI-SNR and the magnitude-spectrum loss yielded the highest performance, better than with a time-domain or frequency-domain loss alone.
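
The best-performing combination reported above (negative SI-SNR plus magnitude-spectrum MSE) can be sketched as follows. The weight `alpha` and the single-frame FFT (a real system would use an STFT) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant source-to-noise ratio in dB (higher is better)."""
    ref_zm = ref - ref.mean()
    est_zm = est - est.mean()
    # Project the estimate onto the reference; the rest counts as noise
    proj = np.dot(est_zm, ref_zm) / (np.dot(ref_zm, ref_zm) + eps) * ref_zm
    noise = est_zm - proj
    return 10 * np.log10(np.dot(proj, proj) / (np.dot(noise, noise) + eps))

def combined_loss(est, ref, alpha=0.5, n_fft=256):
    """Negative SI-SNR (time domain) + magnitude-spectrum MSE
    (frequency domain); alpha weights the two terms."""
    mag_est = np.abs(np.fft.rfft(est, n_fft))
    mag_ref = np.abs(np.fft.rfft(ref, n_fft))
    return -si_snr(est, ref) + alpha * np.mean((mag_est - mag_ref) ** 2)
```

In training, this scalar would be minimized over batches of enhanced/clean pairs; the paper's other combinations swap the magnitude term for the complex-spectrum MSE or the sine-based phase loss.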