• Title/Summary/Keyword: voiced sound


A Study on a Method of U/V Decision by Using The LSP Parameter in The Speech Signal (LSP 파라미터를 이용한 음성신호의 성분분리에 관한 연구)

  • 이희원;나덕수;정찬중;배명진
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.1107-1110
    • /
    • 1999
  • In speech signal processing, an accurate voiced/unvoiced decision is important for robust word recognition and analysis and for high coding efficiency. In this paper, we propose a method of voiced/unvoiced decision using the LSP parameters, which represent the spectral characteristics of the speech signal. Voiced sound has more LSP parameters in the low-frequency region; conversely, unvoiced sound has more LSP parameters in the high-frequency region. That is, the LSP parameter distribution of voiced sound differs from that of unvoiced sound. Also, voiced sound has the minimum of the sequential intervals between LSP parameters in the low-frequency region, while unvoiced sound has it in the high-frequency region. We decide voiced/unvoiced sound using these characteristics. We applied the proposed method to some continuous speech and achieved good performance.

  • PDF
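The decision rule described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper names (`lpc`, `lsp_freqs`, `is_voiced`), the LPC order of 10, and the π/2 split frequency are assumptions for demonstration. The idea is that the tightest adjacent LSP pair falls in the low-frequency half for voiced frames and in the high-frequency half for unvoiced frames.

```python
import numpy as np

def lpc(frame, order):
    """LPC coefficients via autocorrelation and Levinson-Durbin recursion."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1 : n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1 : 0 : -1])
        k = -acc / err
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a

def lsp_freqs(a):
    """Line spectral frequencies in (0, pi) from the LPC polynomial A(z)."""
    ext = np.concatenate([a, [0.0]])
    p_poly = ext + ext[::-1]   # sum polynomial P(z)
    q_poly = ext - ext[::-1]   # difference polynomial Q(z)
    roots = np.concatenate([np.roots(p_poly), np.roots(q_poly)])
    w = np.angle(roots)
    # keep one angle per conjugate pair, dropping the trivial roots at 0 and pi
    return np.sort(w[(w > 1e-4) & (w < np.pi - 1e-4)])

def is_voiced(frame, order=10, split=np.pi / 2):
    """Voiced if the tightest adjacent LSP pair lies in the low-frequency half."""
    w = lsp_freqs(lpc(frame * np.hamming(len(frame)), order))
    gaps = np.diff(w)
    return bool(w[np.argmin(gaps)] < split)
```

On a frame dominated by low-frequency harmonics the LSPs cluster (and their minimum gap occurs) well below the split frequency, so `is_voiced` returns `True`; a frame dominated by high-frequency energy gives the opposite result.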

Voiced/Unvoiced/Silence Classification of Speech Signal Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 유성음/무성음/묵음 분류)

  • Son, Young-Ho;Bae, Keun-Sung
    • Speech Sciences
    • /
    • v.4 no.2
    • /
    • pp.41-54
    • /
    • 1998
  • Speech signals are classified, depending on the characteristics of the waveform, as voiced sound, unvoiced sound, or silence. Voiced sound, produced by an air flow generated by the vibration of the vocal cords, is quasi-periodic, while unvoiced sound, produced by a turbulent air flow passed through some constriction in the vocal tract, is noise-like. Silence represents the ambient noise signal during the absence of speech. The need to decide whether a given segment of a speech waveform should be classified as voiced, unvoiced, or silence has arisen in many speech analysis systems. In this paper, a voiced/unvoiced/silence classification algorithm using spectral change in the wavelet-transformed signal is proposed, and experimental results are presented and discussed.

  • PDF
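The three-way split in the abstract above can be illustrated with a toy sub-band energy rule. This is a deliberately simplified stand-in, not the paper's multi-scale spectral-change algorithm: it uses a single-level Haar transform and assumed threshold values, exploiting only the facts stated in the abstract (voiced is quasi-periodic and low-frequency dominant, unvoiced is noise-like, silence has negligible energy).

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform: approximation (low) and detail (high) bands."""
    x = x[: len(x) // 2 * 2]                      # even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def classify_vus(frame, silence_thresh=1e-4):
    """Label a frame 'V' (voiced), 'U' (unvoiced), or 'S' (silence)."""
    approx, detail = haar_dwt(frame)
    e_low, e_high = np.mean(approx**2), np.mean(detail**2)
    if e_low + e_high < silence_thresh:
        return "S"                                # negligible energy: silence
    return "V" if e_low > e_high else "U"         # low-band vs high-band dominance
```

A slowly varying (quasi-periodic, low-pitched) frame puts its energy in the approximation band, a noise-like or high-frequency frame in the detail band, and a near-zero frame falls under the silence threshold.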

Separation of Voiced Sounds and Unvoiced Sounds for Corpus-based Korean Text-To-Speech (한국어 음성합성기의 성능 향상을 위한 합성 단위의 유무성음 분리)

  • Hong, Mun-Ki;Shin, Ji-Young;Kang, Sun-Mee
    • Speech Sciences
    • /
    • v.10 no.2
    • /
    • pp.7-25
    • /
    • 2003
  • Predicting the right prosodic elements is a key factor in improving the quality of synthesized speech. Prosodic elements include break, pitch, duration, and loudness. Pitch, which is realized as fundamental frequency (F0), is the element most closely related to the quality of synthesized speech. However, the previous method for predicting F0 reveals some problems. If voiced and unvoiced sounds are not correctly classified, the result is wrong pitch prediction, the wrong triphone unit when synthesizing voiced and unvoiced sounds, and audible clicks or vibration. This typically occurs at transitions from voiced to unvoiced sound or from unvoiced to voiced sound. Such problems are not resolved by grammatical rules, and they strongly affect the synthesized sound. Therefore, to reliably obtain the correct pitch value, in this paper we propose a new model for predicting and classifying voiced and unvoiced sounds using the CART tool.

  • PDF
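The CART-based classification that the abstract above proposes can be sketched with scikit-learn's `DecisionTreeClassifier`, which implements the CART algorithm. Everything concrete here is an assumption for demonstration: the paper classifies synthesis units with linguistic and acoustic predictors, whereas this toy trains on two synthetic acoustic features (log energy and zero-crossing rate) from artificial voiced/unvoiced frames.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # scikit-learn's tree is a CART implementation

def frame_features(frame):
    """Toy per-frame features: log energy and zero-crossing rate."""
    log_energy = np.log(np.mean(frame**2) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return [log_energy, zcr]

# Synthetic training data: voiced = low-frequency sines, unvoiced = white noise.
rng = np.random.default_rng(1)
n = np.arange(200)
X, y = [], []
for _ in range(50):
    f0 = rng.uniform(80.0, 300.0)
    phase = rng.uniform(0.0, 2.0 * np.pi)
    X.append(frame_features(np.sin(2 * np.pi * f0 / 8000 * n + phase)))
    y.append(1)  # voiced
    X.append(frame_features(0.3 * rng.standard_normal(len(n))))
    y.append(0)  # unvoiced
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
```

The learned tree separates the two classes essentially on zero-crossing rate (low for periodic frames, near 0.5 for noise), mirroring how a CART model partitions feature space with axis-aligned splits.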

A Novel Algorithm for Discrimination of Voiced Sounds (유성음 구간 검출 알고리즘에 관한 연구)

  • Jang, Gyu-Cheol;Woo, Soo-Young;Yoo, Chang-D.
    • Speech Sciences
    • /
    • v.9 no.3
    • /
    • pp.35-45
    • /
    • 2002
  • A simple algorithm for discriminating voiced sounds in speech is proposed. In addition to low-frequency energy and zero-crossing rate (ZCR), both of which have been widely used in the past for identifying voiced sounds, the proposed algorithm incorporates pitch variation to improve the discrimination rate. Based on the TIMIT corpus, evaluation results show an improvement of 13% in the discrimination of voiced phonemes over the traditional algorithm using only energy and ZCR.

  • PDF
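The combination of cues in the abstract above (energy, ZCR, and pitch variation) can be sketched as a frame-level rule. This is an illustrative reconstruction, not the paper's algorithm: the thresholds, frame length, and the "pitch lag stable across adjacent frames" criterion are assumptions standing in for the paper's pitch-variation measure.

```python
import numpy as np

def frame_metrics(frame, fs=8000):
    """Energy, zero-crossing rate, and autocorrelation pitch lag for one frame."""
    energy = np.mean(frame**2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1 :]
    lo, hi = fs // 400, fs // 60          # search lags covering 60-400 Hz pitch
    lag = lo + int(np.argmax(ac[lo:hi]))
    return energy, zcr, lag

def voiced_frames(x, fs=8000, frame_len=240,
                  e_thresh=0.01, zcr_thresh=0.25, max_lag_jump=10):
    """Flag frames voiced when energy is high, ZCR is low,
    and the pitch lag is stable relative to the previous frame."""
    flags, prev_lag = [], None
    for i in range(0, len(x) - frame_len + 1, frame_len):
        e, z, lag = frame_metrics(x[i : i + frame_len], fs)
        stable = prev_lag is None or abs(lag - prev_lag) <= max_lag_jump
        flags.append(e > e_thresh and z < zcr_thresh and stable)
        prev_lag = lag
    return flags
```

On a signal that switches from a periodic tone to noise, the periodic frames pass all three tests while the noise frames fail on ZCR (and on lag stability), which is the intuition behind adding pitch variation to the classic energy/ZCR pair.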

Improving The Excitation Signal for Low-rate CELP Speech Coding (저전송속도 CELP 부호화기에서 여기신호의 개선)

  • 권철홍
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.136-141
    • /
    • 1998
  • In order to enhance the performance of a CELP coder at low bit rates, it is necessary for the CELP excitation to have a peaky pulse characteristic. In this paper we introduce an excitation signal with such a peaky pulse characteristic, obtained by using a two-tap pitch predictor. Samples of the signal receive different gains according to their amplitudes via the predictor. For voiced sound the signal has the desired peaky pulse characteristic, and its periodicity is well reproduced. In particular, peaky pulses at voiced onsets and the bursts of plosive sounds are clearly reconstructed.

  • PDF
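The two-tap pitch predictor the abstract above builds on can be sketched as a least-squares fit. This shows only the long-term prediction step, not the paper's amplitude-dependent gain shaping; the function name, the least-squares fitting, and the fixed lag are assumptions for illustration.

```python
import numpy as np

def two_tap_pitch_residual(s, lag):
    """Least-squares two-tap pitch predictor:
    s[n] ~ b1*s[n-lag] + b2*s[n-lag-1]; returns taps and prediction residual."""
    y = s[lag + 1 :]                                   # targets s[n]
    X = np.stack([s[1 : len(s) - lag],                 # s[n-lag]
                  s[: len(s) - lag - 1]], axis=1)      # s[n-lag-1]
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, y - X @ b
```

For a strongly periodic (voiced) signal whose period matches the lag, the predictor removes almost all of the energy, leaving a small residual; it is this residual domain in which the coder then emphasizes the peaky pulses.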

L1-L2 Transfer in VOT and f0 Production by Korean English Learners: L1 Sound Change and L2 Stop Production

  • Kim, Mi-Ryoung
    • Phonetics and Speech Sciences
    • /
    • v.4 no.3
    • /
    • pp.31-41
    • /
    • 2012
  • Recent studies have shown that the stop system of Korean is undergoing a sound change in terms of two acoustic parameters, voice onset time (VOT) and fundamental frequency (f0). Because of a VOT merger of a consonantal opposition and the onset-f0 interaction, the relative importance of the two parameters has been changing in Korean, where f0 is a primary cue and VOT a secondary cue in distinguishing lax from aspirated stops in speech production as well as perception. In English, however, VOT is a primary cue and f0 a secondary cue in contrasting voiced and voiceless stops. This study examines how Korean English learners use the two acoustic parameters of L1 in producing L2 English stops and whether the sound change of acoustic parameters in L1 affects L2 speech production. The data were collected from six adult Korean English learners. Results show that Korean English learners use not only VOT but also f0 to contrast L2 voiced and voiceless stops. However, unlike the VOT variation among speakers, the magnitude of the onset-consonant effect on f0 in L2 English was steady and robust, indicating that f0 also plays an important role in realizing the [voice] contrast in L2 English. The results suggest that the important role of f0 in contrasting lax and aspirated stops in L1 Korean is transferred to the contrast of voiced and voiceless stops in L2 English. They imply that, for Korean English learners, f0 rather than VOT will serve as an important perceptual cue in contrasting voiced and voiceless stops in L2 English.

Voice onset time in English and Korean stops with respect to a sound change

  • Kim, Mi-Ryoung
    • Phonetics and Speech Sciences
    • /
    • v.13 no.2
    • /
    • pp.9-17
    • /
    • 2021
  • Voice onset time (VOT) is known to be a primary acoustic cue that differentiates voiced from voiceless stops in the world's languages. While much attention has been given to the sound change of Korean stops, little attention has been given to that of English stops. This study examines the VOT of stop consonants as produced by English speakers in comparison to Korean speakers, to see whether there is any VOT change in English stops and how the effects of stop, place, gender, and individual on VOT differ cross-linguistically. A total of 24 native speakers (11 Americans and 13 Koreans) participated in this experiment. The results showed that, for Korean, the VOT merger of lax and aspirated stops was replicated, and that, for English, voiced stops became initially devoiced and voiceless stops became heavily aspirated. English voiceless stops became longer in VOT than their Korean counterparts. The results suggest that, similar to Korean stops, English stops may also be undergoing a sound change. Since this is the first study to report such a change, more convincing evidence is needed.

A Study on the Simple Algorithm for Discrimination of Voiced Sounds (유성음 구간 검출을 위한 간단한 알고리즘에 관한 연구)

  • 장규철;우수영;박용규;유창동
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.8
    • /
    • pp.727-734
    • /
    • 2002
  • A simple algorithm for discriminating voiced sounds in speech is proposed in this paper. In addition to low-frequency energy and zero-crossing rate (ZCR), both of which have been widely used in the past for identifying voiced sounds, the proposed algorithm incorporates pitch variation to improve the discrimination rate. Based on the TIMIT corpus, evaluation results show an improvement of 13% in the discrimination of voiced phonemes over the traditional algorithm using only energy and ZCR.

Using Korean Phonetic Alphabet (KPA) in Teaching English Stop Sounds to Koreans

  • Jo, Un-Il
    • Proceedings of the KSPS conference
    • /
    • 2000.07a
    • /
    • pp.165-165
    • /
    • 2000
  • At the phoneme level, English stop sounds are classified by the feature of 'voicing': voiceless and voiced (p/b, t/d, k/g). But when realized, a voiceless stop is not always the same sound. For example, the two 'p' sounds in 'people' are different: the former is pronounced with much aspiration, while the latter is pronounced without it. This allophonic difference between [pʰ] and [p] within the English phoneme /p/ can be explained well to Koreans, because in Korean these two sounds exist as two different phonemes (/ㅍ/ and /ㅃ/ respectively). But difficulties lie in teaching the English voiced stop sounds (/b, d, g/) to Koreans, because in Korean voiced stops exist not as phonemes but as allophones of the lenis sounds (/ㅂ, ㄷ, ㄱ/). For example, the narrow transcription of '바보' (a fool) is [baboo]. In word-initial position, Korean lenis stops are pronounced voiceless and even with slight aspiration, while in intervocalic environments they become voiced. That is, in Korean, voiced stops do not occur independently, nor do they have their own letters. To explain all this more effectively to Koreans, it is very helpful to use the Korean Phonetic Alphabet (KPA), devised by Dr. LEE Hyunbok (a professor of phonetics at Seoul National University and chairman of the Phonetic Society of Korea). (omitted)

  • PDF

An Analysis of Homorganic Cluster Lengthening in Late Old English (후기 고대영어의 동질군 장모음화 분석)

  • Kwon, Young-Kook
    • Journal of English Language & Literature
    • /
    • v.55 no.4
    • /
    • pp.719-744
    • /
    • 2009
  • This paper aims to reexamine Homorganic Cluster Lengthening in Late Old English whereby OE short vowels became lengthened before specific consonant clusters such as /-ld, -nd, -mb, -rd, -rð, -ng, -rz/. As for the motivation for this apparently odd-looking sound change, I propose that it was the result of phonologization of the phonetic lengthening of syllables containing resonants homorganic with a following voiced obstruent. Adopting Luick's (1898) view of "resonant+voiced homorganic obstruent" phonologically as a single coda, I show that Homorganic Cluster Lengthening is in fact a natural sound change that can be explained with the proper postulation of a few quantity-related universal constraints within the framework of the Optimality Theory. The fact that the constraints and their ranking as posited in this paper can also account for Pre-Cluster Shortening points to the validity of my approach in the analysis of other quantity changes in Middle English.