• Title/Summary/Keyword: Unvoiced

Search Results: 116

Voiced, Unvoiced, and Silence Classification of Human Speech Signals by Emphasis Characteristics of Spectrum (Spectrum 강조특성을 이용한 음성신호에서 Voiced - Unvoiced - Silence 분류)

  • 배명수;안수길
    • The Journal of the Acoustical Society of Korea / v.4 no.1 / pp.9-15 / 1985
  • In this paper, we describe a new algorithm for deciding whether a given segment of a speech signal should be classified as voiced speech, unvoiced speech, or silence, based on measurements made on the signal. The parameter measured for the voiced-unvoiced classification is the area of each zero-crossing interval, given by multiplying the magnitude by the inverse zero-crossing rate of the speech signal. The parameter employed for the unvoiced-silence classification is the positive area summation over each four-millisecond interval of the high-frequency-emphasized speech signal.

  • PDF
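The zero-crossing-interval area measure described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the classification threshold and frame sizes are assumed placeholders.

```python
import numpy as np

def zero_crossing_interval_areas(x):
    """Sum of |x| over each interval between consecutive zero crossings."""
    x = np.asarray(x, dtype=float)
    signs = np.sign(x)
    signs[signs == 0] = 1.0
    crossings = np.where(np.diff(signs) != 0)[0] + 1
    bounds = np.concatenate(([0], crossings, [len(x)]))
    return np.array([np.abs(x[b:e]).sum() for b, e in zip(bounds[:-1], bounds[1:])])

def classify_voiced_unvoiced(frame, area_threshold=5.0):
    """Voiced frames have few, large-area intervals; unvoiced frames have many small ones."""
    areas = zero_crossing_interval_areas(frame)
    return "voiced" if areas.mean() > area_threshold else "unvoiced"
```

A low-frequency periodic frame yields long zero-crossing intervals with large accumulated magnitude, while a noise-like frame yields many short intervals of small area, which is the contrast the measure exploits.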

Separation of Voiced Sounds and Unvoiced Sounds for Corpus-based Korean Text-To-Speech (한국어 음성합성기의 성능 향상을 위한 합성 단위의 유무성음 분리)

  • Hong, Mun-Ki;Shin, Ji-Young;Kang, Sun-Mee
    • Speech Sciences / v.10 no.2 / pp.7-25 / 2003
  • Predicting the right prosodic elements is a key factor in improving the quality of synthesized speech. Prosodic elements include break, pitch, duration, and loudness. Pitch, which is realized as fundamental frequency (F0), is the element most closely related to the quality of the synthesized speech. However, the previous method for predicting F0 reveals some problems. If voiced and unvoiced sounds are not correctly classified, the result is wrong pitch prediction, the wrong triphone unit when synthesizing voiced and unvoiced sounds, and audible clicks or vibration. Such errors typically occur at transitions from voiced to unvoiced sound or vice versa. This problem cannot be resolved by grammatical rules, and it strongly affects the synthesized sound. Therefore, to reliably obtain correct pitch values, this paper proposes a new model for predicting and classifying voiced and unvoiced sounds using the CART tool.

  • PDF

A Study on a Method of U/V Decision by Using The LSP Parameter in The Speech Signal (LSP 파라미터를 이용한 음성신호의 성분분리에 관한 연구)

  • 이희원;나덕수;정찬중;배명진
    • Proceedings of the IEEK Conference / 1999.06a / pp.1107-1110 / 1999
  • In speech signal processing, accurate voiced/unvoiced decisions are important for robust word recognition and analysis and for high coding efficiency. In this paper, we propose a method for the voiced/unvoiced decision using the LSP parameters, which represent the spectral characteristics of the speech signal. Voiced sound has more LSP parameters in the low-frequency region; on the contrary, unvoiced sound has more LSP parameters in the high-frequency region. That is, the LSP parameter distribution of voiced sound differs from that of unvoiced sound. Also, voiced sound has the minimum interval between consecutive LSP parameters in the low-frequency region, while unvoiced sound has it in the high-frequency region. We decide voiced/unvoiced using these characteristics. We applied the proposed method to continuous speech and achieved good performance.

  • PDF
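For readers unfamiliar with LSPs, the distribution argument above can be illustrated with a small sketch that derives LSP frequencies from LPC coefficients via the standard sum/difference polynomials. This is the textbook construction, not the paper's code, and the order-1 example is purely illustrative.

```python
import numpy as np

def lsp_from_lpc(a):
    """LSP frequencies (rad) from LPC coefficients a = [1, a1, ..., ap].

    P(z) = A(z) + z^-(p+1) A(z^-1),  Q(z) = A(z) - z^-(p+1) A(z^-1);
    the LSPs are the angles of their roots on the upper unit semicircle
    (the trivial roots at 0 and pi are excluded).
    """
    a = np.asarray(a, dtype=float)
    p_poly = np.concatenate((a, [0.0])) + np.concatenate(([0.0], a[::-1]))
    q_poly = np.concatenate((a, [0.0])) - np.concatenate(([0.0], a[::-1]))
    freqs = []
    for poly in (p_poly, q_poly):
        angles = np.angle(np.roots(poly))
        freqs.extend(w for w in angles if 1e-9 < w < np.pi - 1e-9)
    return np.sort(np.array(freqs))

def low_band_lsp_count(a, split=np.pi / 2):
    """Count LSPs below a split frequency; voiced frames concentrate LSPs there."""
    return int(np.sum(lsp_from_lpc(a) < split))
```

A simple decision rule in the spirit of the abstract would compare `low_band_lsp_count` against the count above the split frequency for each analysis frame.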

Variable Time-Scale Modification with Voiced/Unvoiced Decision (유/무성음 결정에 따른 가변적인 시간축 변환)

  • 손단영;김원구;윤대희;차일환
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.5 / pp.788-797 / 1995
  • In this paper, a variable time-scale modification using SOLA (Synchronized OverLap and Add) is proposed, which takes into consideration the different time-scale characteristics of voiced and unvoiced speech. Generally, voiced speech is subject to greater variation in length during time-scale modification than unvoiced speech, but the conventional method performs time-scale modification at a uniform rate for all speech. For this purpose, voiced and unvoiced speech durations at various talking speeds were statistically analyzed. The sentences were spoken at rates of 0.7, 1.3, 1.5, and 1.8 times normal speed. A clipping autocorrelation function was applied to each analysis frame to discriminate voiced from unvoiced speech and obtain their respective variation rates. The results were used to perform variable time-scale modification to produce sentences at rates of 0.7, 1.3, 1.5, and 1.8 times normal speed. To evaluate performance, a MOS test was conducted comparing the proposed voiced/unvoiced variable time-scale modification with the uniform SOLA method. Results indicate that the proposed method produces sentence quality superior to that of the conventional method.

  • PDF
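As context for the comparison above, a bare-bones uniform-rate SOLA routine (the baseline, not the proposed voiced/unvoiced-variable version) might look like the following sketch; the frame, hop, and search parameters are illustrative assumptions.

```python
import numpy as np

def sola(x, alpha, N=512, Sa=256, L=128):
    """Uniform-rate SOLA: alpha > 1 lengthens the signal, alpha < 1 shortens it."""
    x = np.asarray(x, dtype=float)
    Ss = int(round(Sa * alpha))                  # synthesis hop
    y = x[:N].copy()
    m = 1
    while m * Sa + N <= len(x):
        frame = x[m * Sa: m * Sa + N]
        pos = m * Ss
        best_k, best_c = 0, -np.inf
        for k in range(-L, L + 1):               # search the best-aligned overlap
            p = pos + k
            ov = min(len(y) - p, N)
            if p < 0 or ov < Sa // 2:            # require a reasonable overlap
                continue
            seg_y, seg_f = y[p:p + ov], frame[:ov]
            denom = np.linalg.norm(seg_y) * np.linalg.norm(seg_f)
            c = seg_y @ seg_f / denom if denom > 0 else 0.0
            if c > best_c:
                best_c, best_k = c, k
        p = pos + best_k
        ov = max(0, min(len(y) - p, N))
        if ov > 0:                               # crossfade the overlap region
            fade = np.linspace(0.0, 1.0, ov)
            y[p:p + ov] = (1.0 - fade) * y[p:p + ov] + fade * frame[:ov]
            y = np.concatenate((y, frame[ov:]))
        else:
            y = np.concatenate((y, frame))
        m += 1
    return y
```

The paper's variable version would, per the abstract, apply different effective rates to frames decided as voiced versus unvoiced rather than one global `alpha`.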

Voiced/Unvoiced/Silence Classification of Speech Signal Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 유성음/무성음/묵음 분류)

  • Son, Young-Ho;Bae, Keun-Sung
    • Speech Sciences / v.4 no.2 / pp.41-54 / 1998
  • Speech signals are, depending on the characteristics of the waveform, classified as voiced sound, unvoiced sound, or silence. Voiced sound, produced by an airflow generated by the vibration of the vocal cords, is quasi-periodic, while unvoiced sound, produced by a turbulent airflow passing through some constriction in the vocal tract, is noise-like. Silence represents the ambient noise signal during the absence of speech. The need to decide whether a given segment of a speech waveform should be classified as voiced, unvoiced, or silence arises in many speech analysis systems. In this paper, a voiced/unvoiced/silence classification algorithm using spectral change in the wavelet-transformed signal is proposed, and experimental results are presented and discussed.

  • PDF
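The idea of using wavelet-domain energy distribution for voiced/unvoiced/silence decisions can be sketched with a one-level Haar transform. The single decomposition level and the thresholds are simplifying assumptions, not the paper's algorithm.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    x = x[: len(x) // 2 * 2]                      # even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)     # low-frequency content
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)     # high-frequency content
    return approx, detail

def vus_classify(frame, silence_energy=1e-4):
    """Silence: negligible energy; voiced: low band dominates; unvoiced: high band."""
    approx, detail = haar_dwt(frame)
    e_low, e_high = np.mean(approx ** 2), np.mean(detail ** 2)
    if e_low + e_high < silence_energy:
        return "silence"
    return "voiced" if e_low > e_high else "unvoiced"
```

Quasi-periodic voiced frames concentrate energy in the approximation band, while noise-like unvoiced frames push energy into the detail band, which is the contrast the classifier reads.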

Variable Time-Scale Modification with Voiced/Unvoiced Decision (유/무성음 결정에 따른 가변적인 시간축 변환)

  • 손단영
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06c / pp.111-115 / 1994
  • In this paper, a variable time-scale modification using SOLA is proposed, which takes into consideration the different time-scale characteristics of voiced and unvoiced speech. The conventional method performs time-scale modification at a uniform rate for all speech. For this purpose, voiced and unvoiced speech durations at various talking speeds were statistically analyzed. A clipping autocorrelation function was applied to each analysis frame to discriminate voiced from unvoiced speech and obtain their respective variation rates. The results were used to perform variable time-scale modification. To evaluate performance, a MOS test was conducted comparing the proposed voiced/unvoiced variable time-scale modification with the uniform SOLA method. Results indicate that the proposed method produces sentence quality superior to that of the conventional method.

  • PDF

The Recognition of Unvoiced Consonants Using Characteristic Parameters of the Phonemes (음소 특징 파라미터를 이용한 무성자음 인식)

  • 허만택;이종혁;남기곤;윤태훈;김재창;이양성
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.4 / pp.175-182 / 1994
  • In this study, we present an unvoiced-consonant recognition system using characteristic parameters of the phonemes of each syllable. For recognition, time-domain parameters such as the ZCR, the total energy of the consonant region, and the energy of half the consonant region are used, along with frequency-domain parameters such as the frequency spectrum of the transition region. The unvoiced consonants targeted in this study are /ㄱ/, /ㄷ/, /ㅂ/, /ㅈ/, /ㅋ/, /ㅌ/, /ㅍ/, and /ㅊ/. The characteristic parameters of the two regions, extracted from the segmented unvoiced consonants, are fed independently to a recognition system for each region, and the final output is produced by combining the two systems' complementary outputs. The recognition system is implemented using an MLP, which has learning ability. Recognition experiments on 112 unvoiced consonant samples show average recognition rates of 96.4% under an 80% learning rate and 93.7% under a 60% learning rate.

  • PDF
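The time-domain features named above (ZCR, total energy, and the energy of half the consonant region) could be extracted roughly as follows before being fed to an MLP. The exact feature set and any normalization are illustrative assumptions, not the paper's recipe.

```python
import numpy as np

def consonant_features(segment):
    """Time-domain features for an unvoiced-consonant segment:
    zero-crossing rate, total energy, and first-half-to-total energy ratio."""
    seg = np.asarray(segment, dtype=float)
    signs = np.sign(seg)
    signs[signs == 0] = 1.0
    zcr = np.mean(np.diff(signs) != 0)                 # fraction of sign changes
    e_total = np.sum(seg ** 2)
    e_first_half = np.sum(seg[: len(seg) // 2] ** 2)
    return np.array([zcr, e_total, e_first_half / (e_total + 1e-12)])
```

Such a vector, concatenated with spectral features from the transition region, would form the input to the per-region MLP classifiers the abstract describes.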

A Study on Decision of Voiced/Unvoiced Region through Measuring the Vocal Cord Property (성문특성 측정을 통한 유/무성음 결정에 관한 연구)

  • 민소연;강은영;신동성;배명진
    • Proceedings of the IEEK Conference / 2001.06d / pp.281-284 / 2001
  • Speech is classified into voiced and unvoiced signals. Since the amplitude spectrum of voiced speech falls off at about -20 dB/decade, dynamic range is often compressed prior to spectral analysis so that details at weak high frequencies remain visible [5][6]. There is a distinct difference in spectral slope between voiced and unvoiced signals. In this paper, we obtained the slope of each frame using the autocorrelation method and determined voiced/unvoiced regions. We also used energy to detect regions of silence. To present the experimental results, we assign a value of 1 to voiced regions, -1 to unvoiced regions, and 0 to silence regions.

  • PDF
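The spectral-slope cue can be illustrated by fitting a line to the log-magnitude spectrum of a frame. The paper estimates the slope via the autocorrelation method; a direct least-squares fit is used here only for simplicity, with the energy gate and the 1 / -1 / 0 output convention taken from the abstract.

```python
import numpy as np

def vus_label(frame, fs=8000, silence_energy=1e-6):
    """Return 1 (voiced), -1 (unvoiced), or 0 (silence) for one frame."""
    frame = np.asarray(frame, dtype=float)
    if np.mean(frame ** 2) < silence_energy:
        return 0
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    log_mag = 20.0 * np.log10(spectrum + 1e-12)
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    slope = np.polyfit(freqs, log_mag, 1)[0]    # dB per Hz
    # Voiced spectra fall off toward high frequencies (about -20 dB/decade),
    # so a negative fitted slope indicates a voiced frame.
    return 1 if slope < 0 else -1
```

The silence-energy threshold is an assumed placeholder; in practice it would be tuned to the recording's noise floor.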

Segmentation of Continuous Korean Speech Based on Boundaries of Voiced and Unvoiced Sounds (유성음과 무성음의 경계를 이용한 연속 음성의 세그먼테이션)

  • Yu, Gang-Ju;Sin, Uk-Geun
    • The Transactions of the Korea Information Processing Society / v.7 no.7 / pp.2246-2253 / 2000
  • In this paper, we show that the performance of blind segmentation of phoneme boundaries can be enhanced by adopting knowledge of Korean syllabic structure and of the regions of voiced/unvoiced sounds. The proposed method consists of three processes: extracting candidate phoneme boundaries, detecting boundaries of voiced/unvoiced sounds, and selecting the final phoneme boundaries. The candidate phoneme boundaries are extracted by a clustering method based on the similarity between two adjacent clusters. The similarity measure employed in this process is the ratio of the probability densities of adjacent clusters. To detect the boundaries of voiced/unvoiced sounds, we first compute the power density spectrum of the speech signal in the 0-400 Hz frequency band. The points where the variation of this power density spectrum is greater than a threshold are then chosen as the boundaries of voiced/unvoiced sounds. The final phoneme boundaries consist of all the candidate phoneme boundaries in voiced regions and a limited number of candidate phoneme boundaries in unvoiced regions. The experimental results showed about a 40% decrease in insertion rate compared to the blind segmentation method we adopted.

  • PDF
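The 0-400 Hz power-variation detector described above can be sketched as follows; the frame size, hop, and jump-ratio threshold are assumed values, not the paper's settings.

```python
import numpy as np

def low_band_power(x, fs=8000, frame=256, hop=128, f_lo=0.0, f_hi=400.0):
    """Per-frame spectral power in the 0-400 Hz band."""
    x = np.asarray(x, dtype=float)
    freqs = np.fft.rfftfreq(frame, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    window = np.hanning(frame)
    powers = [np.sum(np.abs(np.fft.rfft(x[i:i + frame] * window))[band] ** 2)
              for i in range(0, len(x) - frame + 1, hop)]
    return np.array(powers)

def voiced_unvoiced_boundaries(x, fs=8000, hop=128, ratio=10.0):
    """Frames where the low-band power jumps by more than `ratio` (a guessed
    threshold) are taken as candidate voiced/unvoiced boundaries."""
    p = low_band_power(x, fs=fs, hop=hop)
    log_p = np.log10(p + 1e-12)
    jumps = np.abs(np.diff(log_p)) > np.log10(ratio)
    return np.where(jumps)[0] + 1      # frame indices of boundary candidates
```

Voiced speech carries strong energy below 400 Hz while unvoiced speech carries almost none, so a large jump in this band power marks a likely voicing transition.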

Extraction of Unvoiced Consonant Regions from Fluent Korean Speech in Noisy Environments (잡음환경에서 우리말 연속음성의 무성자음 구간 추출 방법)

  • 박정임;하동경;신옥근
    • The Journal of the Acoustical Society of Korea / v.22 no.4 / pp.286-292 / 2003
  • Voice activity detection (VAD) is a process that separates speech regions from the silence or noise regions of an input speech signal. Since unvoiced consonant signals have characteristics very similar to those of noise signals, carrying out VAD without paying special attention to unvoiced consonants may result in serious distortion of the unvoiced consonants or in erroneous noise estimation. In this paper, we propose a method to extract explicitly the boundaries between unvoiced consonants and noise in fluent speech so that more exact VAD can be performed. The proposed method is based on the frequency-domain histogram that Hirsch used successfully for noise estimation, and also on a similarity measure between the frequency components of adjacent frames. To evaluate the performance of the proposed method, experiments on unvoiced consonant boundary extraction were performed on seven kinds of noisy speech signals at 10 dB and 15 dB SNR, respectively.
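The histogram-based noise estimation attributed to Hirsch above works by taking, for each frequency bin, the most frequently occurring (log) power level over time as the noise level, since noise dominates most frames. A compact sketch under assumed parameters:

```python
import numpy as np

def hirsch_noise_estimate(power, nbins=40):
    """Per-bin noise power estimate from a (frames, bins) spectral power array:
    for each frequency bin, return the mode of the log-power histogram over time."""
    power = np.asarray(power, dtype=float)
    log_p = np.log10(power + 1e-12)
    estimate = np.empty(power.shape[1])
    for b in range(power.shape[1]):
        counts, edges = np.histogram(log_p[:, b], bins=nbins)
        k = np.argmax(counts)                       # most populated histogram bin
        estimate[b] = 10.0 ** ((edges[k] + edges[k + 1]) / 2.0)
    return estimate
```

Because speech (including unvoiced consonants) occupies only a minority of frames, the histogram mode tracks the noise floor rather than the speech energy, which is why distinguishing unvoiced consonants from noise matters for the estimate.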