• Title/Summary/Keyword: 음향 파라미터 (acoustic parameters)

Search results: 387

Endpoint Detection of Speech Signal Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 끝점검출)

  • 석종원;배건성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.6
    • /
    • pp.57-64
    • /
    • 1999
  • In this paper, we investigate a robust endpoint detection algorithm for noisy environments. A new feature parameter based on the discrete wavelet transform is proposed for word-boundary detection of isolated utterances: the sum of the standard deviation of the wavelet coefficients at the third coarse scale and of the weighted first detail scale. Using this wavelet-domain feature, we developed a new and robust endpoint detection algorithm. For performance evaluation, we measured the detection accuracy and the average recognition error rate attributable to endpoint detection in an HMM-based recognition system across several signal-to-noise ratios and noise conditions.

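The standard-deviation-of-wavelet-coefficients feature described above can be sketched with a hand-rolled Haar DWT. The paper's exact wavelet, frame length, and detail weighting are not given here, so `detail_weight` and the test signals below are illustrative:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: approximation + detail."""
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def endpoint_feature(frame, detail_weight=0.5):
    """Sum of the std of third-level approximation coefficients and a
    weighted std of first-level detail coefficients (weight illustrative)."""
    a1, d1 = haar_dwt(frame)
    a2, _ = haar_dwt(a1)
    a3, _ = haar_dwt(a2)
    return float(np.std(a3) + detail_weight * np.std(d1))

# A speech-like frame should score well above a low-level noise frame.
rng = np.random.default_rng(0)
noise = 0.01 * rng.standard_normal(256)
speech = np.sin(2 * np.pi * 120 * np.arange(256) / 8000) \
         + 0.01 * rng.standard_normal(256)
assert endpoint_feature(speech) > endpoint_feature(noise)
```

Thresholding this feature per frame would then give candidate word boundaries.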

An improved automatic segmentation algorithm (자동 음성 분할 시스템의 성능 향상)

  • Kim Mu Jung;Kwon Chul Hong
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.45-48
    • /
    • 2002
  • In this paper, we implemented a semi-automatic speech segmentation system for building a Korean speech-synthesis database: phoneme boundaries are extracted automatically with HMMs, and the results are then corrected using speech parameters. The system targets speech sampled at 16 kHz; 39 phonemes were selected as labeling units, and extended monophones reflecting phonological phenomena were also defined. Phonemic and orthographic transcriptions were used as linguistic input, and HMMs were used for pattern matching. For voiced/unvoiced/silence classification, parameters such as ZCR, log energy, and the energy distribution across frequency bands were used. The system was trained on corpora covering politics, economy, society, culture, and weather, and for performance evaluation, automatic segmentation experiments were carried out on a sentence database not used in training. The results showed that 87% of the detected phoneme boundaries fell within 10 ms of the manually segmented boundaries, and 91% within 30 ms.

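The voiced/unvoiced/silence classification step mentioned above, based on ZCR and log energy, can be sketched as follows (the thresholds are illustrative, not the paper's values):

```python
import numpy as np

def zcr(frame):
    """Zero-crossing rate: fraction of adjacent samples with a sign change."""
    s = np.sign(frame)
    return float(np.mean(s[:-1] != s[1:]))

def log_energy(frame, eps=1e-10):
    """Frame log energy in dB (eps guards against log of zero)."""
    return float(10.0 * np.log10(np.sum(frame ** 2) + eps))

def classify_frame(frame, energy_floor=-40.0, zcr_split=0.3):
    """Toy voiced/unvoiced/silence rule: quiet frames are silence; loud
    frames with many zero crossings are unvoiced, otherwise voiced."""
    if log_energy(frame) < energy_floor:
        return "silence"
    return "unvoiced" if zcr(frame) > zcr_split else "voiced"

rng = np.random.default_rng(0)
n = np.arange(160)                                    # 20 ms at 8 kHz
assert classify_frame(np.zeros(160)) == "silence"
assert classify_frame(0.5 * np.sin(2 * np.pi * 100 * n / 8000)) == "voiced"
assert classify_frame(0.1 * rng.standard_normal(160)) == "unvoiced"
```

A real system would add band-energy features and smooth decisions across frames, as the abstract indicates.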

On a Pitch Alteration Method using Scaling the Harmonics Compensated with the Phase for Speech Synthesis (위상 보상된 고조파 스케일링에 의한 음성합성용 피치변경법)

  • Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.6
    • /
    • pp.91-97
    • /
    • 1994
  • In speech processing, waveform codings are concerned with preserving the waveform of the signal through a redundancy-reduction process. For speech synthesis, high-quality waveform codings are mainly used in analysis-by-synthesis. Because the parameters of such codings are not separated into excitation and vocal-tract components, it is difficult to apply waveform coding to synthesis by rule; to do so, pitch alteration is necessary. In this paper, we propose a new pitch alteration method that changes the pitch period within waveform coding by dividing the speech signal into vocal-tract and excitation parameters. It is a time-frequency-domain method that preserves the phase component of the waveform in the time domain and the magnitude component in the frequency domain, making it possible to use waveform coding for synthesis by rule. With this algorithm we obtained a spectrum distortion of 2.94%, i.e., 5.06 percentage points lower than that of a time-domain pitch alteration method.

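The abstract does not spell out its spectrum-distortion formula; one plausible percentage measure of the kind quoted above is the RMS magnitude-spectrum error relative to the RMS magnitude of the original spectrum, sketched here purely for illustration:

```python
import numpy as np

def spectral_distortion_percent(original, altered, eps=1e-12):
    """Illustrative percentage spectral distortion: RMS error between the
    magnitude spectra, normalized by the RMS of the original spectrum.
    (Not necessarily the paper's exact definition.)"""
    X = np.abs(np.fft.rfft(original))
    Y = np.abs(np.fft.rfft(altered))
    return float(100.0 * np.sqrt(np.mean((X - Y) ** 2))
                 / (np.sqrt(np.mean(X ** 2)) + eps))

x = np.sin(2 * np.pi * 100 * np.arange(400) / 8000)
assert spectral_distortion_percent(x, x) == 0.0      # identical signals
assert spectral_distortion_percent(x, 0.9 * x) > 0.0 # any change costs distortion
```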

The Effect of Auditory Condition on Voice Parameter of Teacher (청각 환경이 교사의 음성 파라미터에 미치는 영향)

  • Lee Ju-Young;Baek Kwang-Hyun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.5
    • /
    • pp.207-212
    • /
    • 2006
  • The purpose of this study was to compare voice parameters across auditory conditions (normal/noise/music) between a teacher group and a control group. Statistical analysis showed that the teacher group had higher jitter (%) and shimmer (%) values than the control group, indicating larger variations in the pitch and amplitude of their voices. In the teacher group, the voice under the noisy condition showed a higher fundamental frequency than under the normal condition, though the fundamental frequency did not differ significantly between the noisy and musical conditions. In the control group, the voice under the noisy condition likewise showed a higher fundamental frequency than under the normal condition, but the fundamental frequency did differ significantly between the noisy and musical conditions.
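The jitter (%) and shimmer (%) values compared above are commonly computed as the mean absolute difference between consecutive pitch periods (or cycle peak amplitudes), relative to the mean; a minimal sketch of that standard local definition:

```python
import numpy as np

def jitter_percent(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, as a percentage of the mean period."""
    p = np.asarray(periods, dtype=float)
    return float(100.0 * np.mean(np.abs(np.diff(p))) / np.mean(p))

def shimmer_percent(amplitudes):
    """Local shimmer: the same measure applied to cycle peak amplitudes."""
    a = np.asarray(amplitudes, dtype=float)
    return float(100.0 * np.mean(np.abs(np.diff(a))) / np.mean(a))

assert jitter_percent([8.0, 8.0, 8.0]) == 0.0        # perfectly steady pitch
assert abs(jitter_percent([10.0, 11.0, 10.0]) - 300.0 / 31.0) < 1e-9
assert shimmer_percent([1.0, 1.1, 1.0]) > shimmer_percent([1.0, 1.0, 1.0])
```

Higher values, as reported for the teacher group, mean more cycle-to-cycle irregularity.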

A PCA-based MFDWC Feature Parameter for Speaker Verification System (화자 검증 시스템을 위한 PCA 기반 MFDWC 특징 파라미터)

  • Hahm Seong-Jun;Jung Ho-Youl;Chung Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.1
    • /
    • pp.36-42
    • /
    • 2006
  • A principal component analysis (PCA)-based Mel-frequency discrete wavelet coefficient (MFDWC) feature parameter for speaker verification systems is presented in this paper. In this method, the first eigenvector obtained from PCA is used to calculate the energy of each node of the level approximated by the mel scale. This eigenvector satisfies the constraint of a general weighting function, namely that the squared sum of its components is unity, and is considered to represent each speaker's characteristics closely, because the first eigenvector differs considerably from speaker to speaker. For verification, we used the Universal Background Model (UBM) approach, which compares the claimed speaker's model with the UBM at the frame level. Experiments testing the effectiveness of the PCA-based parameter showed that the proposed parameters obtained average performance improvements of 0.80% over MFCC, 5.14% over LPCC, and 6.69% over the existing MFDWC.
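The constraint noted above, that the weighting function's components have a squared sum of one, holds automatically for a unit-norm eigenvector from PCA. A minimal sketch (the energy matrix `E` is a synthetic stand-in for per-node wavelet energies):

```python
import numpy as np

def first_eigenvector(X):
    """First principal component: the unit-norm eigenvector of the sample
    covariance of X (rows = observations) with the largest eigenvalue."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, np.argmax(eigvals)]
    return v if v.sum() >= 0 else -v       # fix the arbitrary sign

rng = np.random.default_rng(1)
E = rng.random((200, 8))       # synthetic stand-in for 8 per-node energies
v = first_eigenvector(E)
# Squared components sum to one -- the weighting-function constraint.
assert abs(np.sum(v ** 2) - 1.0) < 1e-9
weighted_energies = E @ v      # speaker-specific weighting of node energies
```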

Syllable Recognition of HMM using Segment Dimension Compression (세그먼트 차원압축을 이용한 HMM의 음절인식)

  • Kim, Joo-Sung;Lee, Yang-Woo;Hur, Kang-In;Ahn, Jum-Young
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.2
    • /
    • pp.40-48
    • /
    • 1996
  • In this paper, 40-dimensional segment vectors with 4- and 7-frame widths over each monosyllable interval were compressed into 10-, 14-, and 20-dimensional vectors using the K-L expansion and neural networks, and these were used as feature parameters for CHMM speech recognition. We also compared them with CHMMs augmented with discrete duration time, regression coefficients, and mixture distributions as feature parameters. In a recognition test on 100 monosyllables, the recognition rates of CHMM+ΔMCEP, CHMM+MIX, and CHMM+DD improved by 1.4%, 2.36%, and 2.78%, respectively, over the 85.19% of the baseline CHMM. Rates using vectors compressed by the K-L expansion were lower than those of MCEP+ΔMCEP, but those using K-L+MCEP and K-L+ΔMCEP were almost the same. Neural networks reflect the dynamic variability of speech better than the K-L expansion because they use the sigmoid function as a non-linear transform; recognition rates using vectors compressed by neural networks were higher than those using the K-L expansion and the other methods.

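The K-L expansion used above for segment-dimension compression is a projection onto the top-k eigenvectors of the sample covariance; a sketch compressing 40-dimensional segment vectors to 10 dimensions (the data here is synthetic):

```python
import numpy as np

def kl_compress(X, k):
    """Karhunen-Loeve (PCA) compression: project mean-centred vectors
    onto the top-k eigenvectors of the sample covariance."""
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    basis = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # top-k directions
    return (X - mean) @ basis, basis, mean

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 40))      # synthetic 40-dim segment vectors
Y10, basis10, mean = kl_compress(X, 10)
assert Y10.shape == (500, 10)
# Approximate reconstruction from the compressed representation:
X_hat = Y10 @ basis10.T + mean
assert X_hat.shape == X.shape
```

Unlike this linear projection, the paper's neural-network compressor applies a sigmoid non-linearity, which is what lets it capture more of the speech dynamics.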

Digit Recognition Rate Comparison in DHMM and Neural Network (DHMM과 신경망에서 숫자음 인식률 비교)

  • 박정환;이원일;황태문;이종혁
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.05a
    • /
    • pp.171-174
    • /
    • 2002
  • A speech signal is an acoustic signal carrying many kinds of information, including linguistic content, speaker identity, and emotion, and is also one of the most natural and widely used means of communication. In this study, we compared recognition rates obtained with a DHMM and with a neural network, in two settings: using feature parameters extracted from recorded speech signals alone, and using the speech feature parameters together with visual information on lip patterns. The results showed that visual information on lip patterns can also be used for speech recognition.


Application of Acoustic Emission Technique and Friction Welding for Excavator Hose Nipple (굴삭기용 호스 니플의 마찰용접과 음향방출기법의 적용)

  • Kong, Yu-Sik;Lee, Jin-Kyung
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.33 no.5
    • /
    • pp.436-442
    • /
    • 2013
  • Friction welding is a very useful joining process for welding metals that have an axially symmetric cross-section. In this paper, the feasibility of industrial application was assessed by analyzing the mechanical properties of the weld region in tube-to-tube specimens for excavator hose nipples joined by friction welding, and optimized welding variables were suggested. Friction heating pressure and friction heating time were selected as the major process variables, and the experiment was performed at three levels of each parameter. An acoustic emission (AE) technique was applied to evaluate the optimal friction-welding conditions nondestructively. The AE parameters of cumulative count and event were analyzed in terms of the trend of AE signals generated over the full friction-welding cycle, and the typical waveform and frequency spectrum of the AE signals generated by friction welding are discussed. From this study, the optimal welding variables could be suggested as a rotating speed of 1300 rpm, a friction heating pressure of 15 MPa, and a friction heating time of 10 s. The AE event count proved a useful parameter for estimating the tensile strength of the tube-to-tube friction-welded specimens.

Emotional Speech Synthesis using the Emotion Editor Program (감정 편집기를 이용한 감정 음성 합성)

  • Chun Heejin;Lee Yanghee
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.79-82
    • /
    • 2000
  • To synthesize emotionally expressive speech, this study analyzed how the pitch and duration of emotional speech data vary by syllable type and by syllable position within a word (eojeol), and how the spectral envelope is affected by changes in emotion. The results showed that pitch and duration variations by syllable type and position, as well as the spectral envelope, all influence emotional expression. Applying these acoustic analysis results, we implemented an emotion editor that generates emotional speech by adjusting the phonetic and prosodic parameters of neutral speech (pitch, energy, duration, and spectral envelope), in order to synthesize and evaluate emotional speech.

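Two of the prosody controls named above (energy and duration) can be sketched naively as follows; note that plain resampling also shifts pitch, so a real emotion editor would use a method such as PSOLA to modify pitch and duration independently:

```python
import numpy as np

def adjust_prosody(x, energy_gain=1.0, duration_factor=1.0):
    """Naive prosody adjustment: amplitude scaling for energy, and
    duration change by linear-interpolation resampling (illustrative;
    this resampling also changes the perceived pitch)."""
    n_out = max(2, int(round(len(x) * duration_factor)))
    t_out = np.linspace(0.0, len(x) - 1.0, n_out)
    y = np.interp(t_out, np.arange(len(x)), x)
    return energy_gain * y

x = np.sin(2 * np.pi * 120 * np.arange(800) / 8000)   # 100 ms at 8 kHz
y = adjust_prosody(x, energy_gain=1.5, duration_factor=1.25)
assert len(y) == 1000                     # 25% longer
assert np.max(np.abs(y)) > np.max(np.abs(x))   # louder
```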

The Development of audio codec using binaural cue coding technologies (Binaural Cue Coding 기술을 이용한 오디오 코덱 구현)

  • Seo Jeongil;Kang Kyeongok;Lee Byonghwa;Hahn Minsoo
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.137-140
    • /
    • 2004
  • Spatial Audio Coding, newly proposed for transmitting multi-channel, multi-object audio signals over low-bandwidth channels, is a parametric compression scheme that downmixes the multi-channel audio signal and represents the remaining channels with parameters describing their positions in the acoustic space. In this paper, we implemented a stereo audio codec using Binaural Cue Coding (BCC), one of the Spatial Audio Coding techniques, and confirmed through subjective listening tests that it achieves performance comparable to AAC at a higher compression ratio.

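A toy version of the BCC-style analysis described above, a mono downmix plus a spatial-cue parameter, can be sketched with a single inter-channel level difference (ICLD) per frame (real BCC computes cues per critical band and also uses time-difference and coherence cues):

```python
import numpy as np

def bcc_analyze(left, right, eps=1e-12):
    """Toy BCC-style analysis: a mono downmix plus one inter-channel
    level difference (ICLD) in dB for the whole frame."""
    downmix = 0.5 * (left + right)
    icld_db = 10.0 * np.log10((np.sum(left ** 2) + eps) /
                              (np.sum(right ** 2) + eps))
    return downmix, icld_db

n = np.arange(480)                          # 10 ms frame at 48 kHz
left = np.sin(2 * np.pi * 440 * n / 48000)
right = 0.5 * left                          # right channel at half amplitude
downmix, icld_db = bcc_analyze(left, right)
assert downmix.shape == left.shape
assert abs(icld_db - 10.0 * np.log10(4.0)) < 1e-6   # about 6 dB
```

The decoder would transmit only the downmix plus such cue parameters, which is where the bit-rate saving over coding both channels comes from.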