• Title/Abstract/Keywords: Unvoiced

116 search results

Spectrum 강조특성을 이용한 음성신호에서 Voiced - Unvoiced - Silence 분류 (Voiced, Unvoiced, and Silence Classification of Human Speech Signals by Emphasis Characteristics of Spectrum)

  • 배명수;안수길
    • 한국음향학회지
    • /
    • 제4권1호
    • /
    • pp.9-15
    • /
    • 1985
  • In this paper, we describe a new algorithm for deciding whether a given segment of a speech signal should be classified as voiced speech, unvoiced speech, or silence, based on measurements made on the signal. The parameter measured for the voiced-unvoiced classification is the area of each zero-crossing interval, given by multiplying the magnitude by the inverse zero-crossing rate of the speech signal. The parameter employed for the unvoiced-silence classification is the positive area summation over four-millisecond intervals of the high-frequency-emphasized speech signal.

  • PDF
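The zero-crossing-interval area measure described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the `voiced_thr`/`silence_thr` decision thresholds are hypothetical values chosen for the example.

```python
import numpy as np

def zero_crossing_interval_areas(frame):
    """Area of each zero-crossing interval: the summed magnitude of the
    samples between two consecutive sign changes."""
    signs = np.sign(frame)
    signs[signs == 0] = 1                      # treat zeros as positive
    crossings = np.where(np.diff(signs) != 0)[0] + 1
    bounds = np.concatenate(([0], crossings, [len(frame)]))
    return np.array([np.abs(frame[a:b]).sum()
                     for a, b in zip(bounds[:-1], bounds[1:])])

def classify_frame(frame, voiced_thr=5.0, silence_thr=0.1):
    """Toy V/U/S decision: near-zero total magnitude -> silence;
    large mean interval area (long, strong intervals) -> voiced."""
    if np.abs(frame).sum() < silence_thr:
        return "silence"
    areas = zero_crossing_interval_areas(frame)
    return "voiced" if areas.mean() > voiced_thr else "unvoiced"
```

A low-frequency sinusoid yields few, large-area intervals (voiced), while a rapidly alternating signal yields many tiny intervals (unvoiced).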

한국어 음성합성기의 성능 향상을 위한 합성 단위의 유무성음 분리 (Separation of Voiced Sounds and Unvoiced Sounds for Corpus-based Korean Text-To-Speech)

  • 홍문기;신지영;강선미
    • 음성과학
    • /
    • 제10권2호
    • /
    • pp.7-25
    • /
    • 2003
  • Predicting the right prosodic elements is a key factor in improving the quality of synthesized speech. Prosodic elements include break, pitch, duration, and loudness. Pitch, which is realized as fundamental frequency (F0), is the element most closely related to the quality of the synthesized speech. However, the previous method for predicting F0 reveals some problems. If voiced and unvoiced sounds are not correctly classified, the result is wrong pitch prediction, selection of the wrong triphone unit when synthesizing voiced and unvoiced sounds, and audible clicks or vibration. These errors typically occur at transitions from voiced to unvoiced sound or vice versa. The problem cannot be resolved by grammatical rules, and it strongly affects the synthesized sound. Therefore, to reliably obtain correct pitch values, this paper proposes a new model for classifying and predicting voiced and unvoiced sounds using the CART tool.

  • PDF

LSP 파라미터를 이용한 음성신호의 성분분리에 관한 연구 (A Study on a Method of U/V Decision by Using The LSP Parameter in The Speech Signal)

  • 이희원;나덕수;정찬중;배명진
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 1999년도 하계종합학술대회 논문집
    • /
    • pp.1107-1110
    • /
    • 1999
  • In speech signal processing, accurate voiced/unvoiced decision is important for robust word recognition and analysis and for high coding efficiency. In this paper, we propose a method of voiced/unvoiced decision using the LSP parameters, which represent the spectral characteristics of the speech signal. Voiced sound has more LSP parameters in the low-frequency region; unvoiced sound, in contrast, has more LSP parameters in the high-frequency region. That is, the LSP parameter distribution of voiced sound differs from that of unvoiced sound. Also, voiced sound has the minimum interval between consecutive LSP parameters in the low-frequency region, while unvoiced sound has it in the high-frequency region. We decide voiced/unvoiced using these characteristics. We applied the proposed method to continuous speech and achieved good performance.

  • PDF
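The minimum-LSP-interval rule in the abstract above might be sketched like this. It assumes the LSP frequencies have already been computed (the LPC-to-LSP conversion is omitted), and the `fs / 4` split between "low" and "high" frequency regions is an illustrative threshold, not a value from the paper.

```python
import numpy as np

def uv_from_lsp(lsp_hz, fs=8000):
    """Decide voiced/unvoiced from where the tightest pair of adjacent
    LSP frequencies lies: low band -> voiced, high band -> unvoiced."""
    lsp = np.sort(np.asarray(lsp_hz, dtype=float))
    gaps = np.diff(lsp)
    i = np.argmin(gaps)                      # index of the minimum interval
    centre = 0.5 * (lsp[i] + lsp[i + 1])     # midpoint of the tightest pair
    return "voiced" if centre < fs / 4 else "unvoiced"
```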

유/무성음 결정에 따른 가변적인 시간축 변환 (Variable Time-Scale Modification with Voiced/Unvoiced Decision)

  • 손단영;김원구;윤대희;차일환
    • 전자공학회논문지B
    • /
    • 제32B권5호
    • /
    • pp.788-797
    • /
    • 1995
  • In this paper, a variable time-scale modification using SOLA (Synchronized OverLap and Add) is proposed, which takes into consideration the different time-scaling characteristics of voiced and unvoiced speech. Generally, voiced speech undergoes greater variation in length during time-scale modification than unvoiced speech, but the conventional method performs time-scale modification at a uniform rate for all speech. For this purpose, voiced and unvoiced speech durations at various talking speeds were statistically analyzed. The sentences were spoken at rates of 0.7, 1.3, 1.5, and 1.8 times normal speed. A clipping autocorrelation function was applied to each analysis frame to determine voiced and unvoiced speech and to obtain the respective variation rates. The results were used to perform variable time-scale modification producing sentences at rates of 0.7, 1.3, 1.5, and 1.8 times normal speed. To evaluate performance, a MOS test was conducted comparing the proposed voiced/unvoiced variable time-scale modification with the uniform SOLA method. Results indicate that the proposed method produces sentence quality superior to that of the conventional method.

  • PDF
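The clipping-autocorrelation voicing decision mentioned in the abstract above can be sketched as a centre-clipped autocorrelation detector. The clipping level `clip`, the threshold `thr`, and the 60-400 Hz pitch search range are illustrative assumptions, not values reported by the paper.

```python
import numpy as np

def is_voiced(frame, fs=8000, clip=0.3, thr=0.35):
    """Centre-clipped autocorrelation voicing detector (sketch)."""
    c = clip * np.abs(frame).max()
    # centre clipping suppresses formant structure, keeping pitch peaks
    x = np.where(frame > c, frame - c,
                 np.where(frame < -c, frame + c, 0.0))
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    if r[0] <= 0:                      # silent frame: no energy
        return False
    lo, hi = fs // 400, fs // 60       # lags for a 60-400 Hz pitch range
    return r[lo:hi].max() / r[0] > thr
```

A periodic frame keeps a strong normalized autocorrelation peak at the pitch lag; a silent frame has no energy and is rejected immediately.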

웨이브렛 변환을 이용한 음성신호의 유성음/무성음/묵음 분류 (Voiced/Unvoiced/Silence Classification of Speech Signal Using Wavelet Transform)

  • 손영호;배건성
    • 음성과학
    • /
    • 제4권2호
    • /
    • pp.41-54
    • /
    • 1998
  • Speech signals are, depending on the characteristics of the waveform, classified as voiced sound, unvoiced sound, or silence. Voiced sound, produced by an air flow generated by the vibration of the vocal cords, is quasi-periodic, while unvoiced sound, produced by a turbulent air flow passing through some constriction in the vocal tract, is noise-like. Silence represents the ambient noise signal during the absence of speech. The need to decide whether a given segment of a speech waveform should be classified as voiced, unvoiced, or silence arises in many speech analysis systems. In this paper, a voiced/unvoiced/silence classification algorithm using spectral change in the wavelet-transformed signal is proposed, and experimental results are presented and discussed.

  • PDF

유/무성음 결정에 따른 가변적인 시간축 변환 (Variable Time-Scale Modification with Voiced/Unvoiced Decision)

  • 손단영
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1994년도 제11회 음성통신 및 신호처리 워크샵 논문집 (SCAS 11권 1호)
    • /
    • pp.111-115
    • /
    • 1994
  • In this paper, a variable time-scale modification using SOLA is proposed, which takes into consideration the different time-scaling characteristics of voiced and unvoiced speech. The conventional method performs time-scale modification at a uniform rate for all speech. For this purpose, voiced and unvoiced speech durations at various talking speeds were statistically analyzed. A clipping autocorrelation function was applied to each analysis frame to determine voiced and unvoiced speech and to obtain the respective variation rates. The results were used to perform variable time-scale modification. To evaluate performance, a MOS test was conducted comparing the proposed voiced/unvoiced variable time-scale modification with the uniform SOLA method. Results indicate that the proposed method produces sentence quality superior to that of the conventional method.

  • PDF

음소 특정 파라미터를 이용한 무성자음 인식 (The Recognition of Unvoiced Consonants Using Characteristic Parameters of the Phonemes)

  • 허만택;이종혁;남기곤;윤태훈;김재창;이양성
    • 전자공학회논문지B
    • /
    • 제31B권4호
    • /
    • pp.175-182
    • /
    • 1994
  • In this study, we present an unvoiced consonant recognition system using characteristic parameters of the phonemes of each syllable. For recognition, time-domain parameters such as the zero-crossing rate (ZCR), the total energy of the consonant region, and the half-region energy of the consonant region are used, along with frequency-domain parameters such as the frequency spectrum of the transition region. The target unvoiced consonants in this study are /ㄱ/, /ㄷ/, /ㅂ/, /ㅈ/, /ㅋ/, /ㅌ/, /ㅍ/, and /ㅊ/. The characteristic parameters of the two regions extracted from the segmented unvoiced consonants are fed independently to a recognizer for each region, and the two outputs complement each other to produce the final output. The recognition system is implemented using an MLP, which has learning ability. Simulation results for 112 unvoiced consonant samples show average recognition rates of 96.4% with an 80% training ratio and 93.7% with a 60% training ratio.

  • PDF
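The time-domain features named in the abstract above (ZCR, total energy, half-region energy) can be sketched as below. Taking the *first* half of the region for the half-region energy is an assumption for illustration; the abstract does not specify which half.

```python
import numpy as np

def consonant_features(seg):
    """Per-region time-domain features: zero-crossing rate, total
    energy, and energy of the first half of the region (sketch)."""
    signs = np.sign(seg)
    signs[signs == 0] = 1
    zcr = np.count_nonzero(np.diff(signs)) / len(seg)
    total = float(np.sum(seg ** 2))
    half = float(np.sum(seg[: len(seg) // 2] ** 2))
    return zcr, total, half
```

These three numbers, together with transition-region spectra, would form the input vector to each region's MLP recognizer.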

성문특성 측정을 통한 유/무성음 결정에 관한 연구 (A Study on Decision of Voiced/Unvoiced Region through Measuring the Vocal Cord Property)

  • 민소연;강은영;신동성;배명진
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2001년도 하계종합학술대회 논문집(4)
    • /
    • pp.281-284
    • /
    • 2001
  • Speech is classified into voiced and unvoiced signals. Since the spectrum of voiced speech falls off at about -20 dB/decade, dynamic range is often compressed prior to spectral analysis so that details at weak, high frequencies remain visible [5][6]. There is a distinct difference in spectral slope between voiced and unvoiced signals. In this paper, we obtained the slope of each frame using the autocorrelation method and determined the voiced/unvoiced regions. We also used energy to detect silence regions. To present the experimental results, we assign a value of 1 to voiced regions, -1 to unvoiced regions, and 0 to silence regions.

  • PDF
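The 1/-1/0 labelling scheme above can be sketched with a simple energy gate plus a spectral-tilt test. Note this sketch estimates tilt by comparing low- and high-band FFT magnitudes rather than the paper's autocorrelation method, and the `energy_thr` value is illustrative.

```python
import numpy as np

def label_frame(frame, energy_thr=1e-3):
    """Label a frame 1 (voiced), -1 (unvoiced), or 0 (silence) from
    short-time energy and a crude spectral-tilt comparison."""
    if np.mean(frame ** 2) < energy_thr:
        return 0                                  # silence
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    half = len(spec) // 2
    low, high = spec[:half].sum(), spec[half:].sum()
    # voiced spectra fall off toward high frequencies (negative tilt)
    return 1 if low > high else -1
```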

유성음과 무성음의 경계를 이용한 연속 음성의 세그먼테이션 (Segmentation of Continuous Korean Speech Based on Boundaries of Voiced and Unvoiced Sounds)

  • 유강주;신욱근
    • 한국정보처리학회논문지
    • /
    • 제7권7호
    • /
    • pp.2246-2253
    • /
    • 2000
  • In this paper, we show that the performance of blind segmentation of phoneme boundaries can be enhanced by exploiting knowledge of Korean syllabic structure and the regions of voiced/unvoiced sounds. The proposed method consists of three processes: extracting candidate phoneme boundaries, detecting the boundaries of voiced/unvoiced sounds, and selecting the final phoneme boundaries. The candidate phoneme boundaries are extracted by a clustering method based on the similarity between two adjacent clusters; the similarity measure employed in this process is the ratio of the probability densities of adjacent clusters. To detect the boundaries of voiced/unvoiced sounds, we first compute the power density spectrum of the speech signal in the 0-400 Hz frequency band. The points where the variation of this power density spectrum is greater than a threshold are then chosen as the boundaries of voiced/unvoiced sounds. The final phoneme boundaries consist of all the candidate phoneme boundaries in voiced regions and a limited number of candidate phoneme boundaries in unvoiced regions. The experimental results showed about a 40% decrease in insertion rate compared to the baseline blind segmentation method.

  • PDF
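The 0-400 Hz band-power variation test described in the abstract above might be sketched as follows. The log-ratio threshold `thr` and the frame layout are illustrative assumptions, not values from the paper.

```python
import numpy as np

def low_band_power(frames, fs=16000):
    """Per-frame power in the 0-400 Hz band (frames: 2-D array,
    one frame per row)."""
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    k = int(400 / fs * frames.shape[1])     # last FFT bin at/below 400 Hz
    return spec[:, :k + 1].sum(axis=1)

def vu_boundaries(frames, fs=16000, thr=2.0):
    """Indices of frames where the 0-400 Hz power jumps by more than
    `thr` in log ratio relative to the previous frame."""
    p = low_band_power(frames, fs) + 1e-12  # avoid log(0)
    return np.where(np.abs(np.diff(np.log(p))) > thr)[0] + 1
```

A switch from a low-frequency (voiced-like) signal to a high-frequency (unvoiced-like) one produces a large low-band power jump at the transition frame.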

잡음환경에서 우리말 연속음성의 무성자음 구간 추출 방법 (Extraction of Unvoiced Consonant Regions from Fluent Korean Speech in Noisy Environments)

  • 박정임;하동경;신옥근
    • 한국음향학회지
    • /
    • 제22권4호
    • /
    • pp.286-292
    • /
    • 2003
  • Speech region extraction is the process of dividing an input speech signal into speech regions and silence or noise regions. In noisy speech, the unvoiced consonant signal closely resembles the noise signal. Therefore, when extracting speech regions or removing or reducing noise, failing to pay special attention to unvoiced consonants can damage them or lead to incorrect noise estimation. In this paper, to accurately extract the speech regions of continuous speech in noisy environments, we propose a method that extracts unvoiced consonant regions by explicitly detecting the boundary between noise and unvoiced consonants. The proposed method uses the histogram method that Hirsch employed for noise estimation, together with parameters representing the similarity of spectral components between consecutive frames. To evaluate the performance of the proposed method, we conducted unvoiced consonant extraction experiments with seven kinds of noise added to the speech signal at SNRs of 10 dB and 15 dB.
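The histogram-based noise estimate attributed to Hirsch in the abstract above can be sketched in simplified form: the noise floor is taken as the most frequently occurring frame-energy value. The bin count and function name are illustrative; the full method also tracks the estimate over time, which is omitted here.

```python
import numpy as np

def hirsch_noise_estimate(frame_energies, nbins=40):
    """Simplified histogram noise-floor estimate: return the centre of
    the most populated energy bin (noise frames dominate the count)."""
    hist, edges = np.histogram(frame_energies, bins=nbins)
    i = np.argmax(hist)
    return 0.5 * (edges[i] + edges[i + 1])
```

With mostly low-energy (noise) frames and a few high-energy (speech) frames, the estimate lands in the low-energy mode rather than being pulled up by the speech frames, unlike a plain mean.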