Title/Summary/Keyword: Speech Signals

Search Results: 499

Sentence Type Identification in Korean: Applications to Korean-Sign Language Translation and Korean Speech Synthesis (한국어 문장 유형의 자동 분류: 한국어-수화 변환 및 한국어 음성 합성에의 응용)

  • Chung, Jin-Woo;Lee, Ho-Joon;Park, Jong-C.
    • Journal of the HCI Society of Korea, v.5 no.1, pp.25-35, 2010
  • This paper proposes a method of automatically identifying sentence types in Korean and improving naturalness in sign language generation and speech synthesis using the identified sentence type information. In Korean, sentences are usually categorized into five types: declarative, imperative, propositive, interrogative, and exclamatory. However, these types are also known to be quite ambiguous to identify in dialogues. In this paper, we present additional morphological and syntactic clues for the sentence type and propose a rule-based procedure for identifying the sentence type using these clues. The experimental results show that our method gives reasonable performance. We also describe how the sentence type is used to generate non-manual signals in Korean-to-Korean Sign Language translation and appropriate intonation in Korean speech synthesis. Since the use of sentence type information in speech synthesis and sign language generation has not been studied much previously, we anticipate that our method will contribute to research on generating more natural speech and sign language expressions.

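The abstract does not spell out the rule-based procedure, but the general idea of mapping sentence-final clues to one of the five types can be sketched as follows; the ending lists here are illustrative assumptions, not the clue set from the paper:

```python
# Minimal sketch of rule-based Korean sentence type identification.
# The ending lists below are illustrative; the paper's morphological and
# syntactic clues are richer and also address ambiguity in dialogue.

SENTENCE_TYPE_ENDINGS = {
    "interrogative": ("까", "니", "나요", "?"),
    "imperative":    ("어라", "아라", "세요", "십시오"),
    "propositive":   ("자", "읍시다", "시다"),
    "exclamatory":   ("구나", "군요", "네요", "!"),
}

def identify_sentence_type(sentence: str) -> str:
    """Return one of the five Korean sentence types based on final endings."""
    s = sentence.strip()
    for stype, endings in SENTENCE_TYPE_ENDINGS.items():
        if s.endswith(endings):
            return stype
    return "declarative"  # default type when no clue matches

print(identify_sentence_type("학교에 갑니까?"))  # interrogative
print(identify_sentence_type("같이 가자"))       # propositive
```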

The usefulness of the depth images in image-based speech synthesis (영상 기반 음성합성에서 심도 영상의 유용성)

  • Ki-Seung Lee
    • The Journal of the Acoustical Society of Korea, v.42 no.1, pp.67-74, 2023
  • Images acquired from the speaker's mouth region reveal patterns unique to the corresponding voices. Based on this principle, several methods have been proposed in which speech signals are recognized or synthesized from images of the speaker's lower face. In this study, an image-based speech synthesis method was proposed in which depth images are used cooperatively. Since depth images yield depth information that cannot be acquired from optical images, they can supplement flat optical images. In this paper, the usefulness of depth images from the perspective of speech synthesis was evaluated. A validation experiment on 60 Korean isolated words confirmed that performance in terms of both subjective and objective evaluation was comparable to the optical-image-based method. When the two images were used in combination, performance improvements were observed compared with when each image was used alone.
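
The paper's abstract does not detail how the two modalities are combined; one common and minimal option, stated here purely as an assumption, is early fusion, where the normalized depth map is appended to the optical image as a fourth channel before it enters an image-to-speech model:

```python
import numpy as np

# Hypothetical early fusion of an optical lip image and a depth image:
# the depth map is appended as an extra channel so a downstream
# image-to-speech model sees both modalities at once.

def fuse_optical_depth(optical: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """optical: (H, W, 3) uint8 image; depth: (H, W) depth map in mm."""
    depth_norm = (depth - depth.min()) / (np.ptp(depth) + 1e-8)  # scale to [0, 1]
    optical_norm = optical.astype(np.float32) / 255.0
    return np.dstack([optical_norm, depth_norm[..., None]])      # (H, W, 4)

frame = fuse_optical_depth(
    np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8),
    np.random.uniform(300, 600, (64, 64)),
)
print(frame.shape)  # (64, 64, 4)
```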

Analysis and Synthesis of Audio Signals using a Sinusoidal Model with Psychoacoustic Criteria (정현파 모델을 이용한 오디오 신호의 심리음향적 분석 및 합성)

  • 남승현;강경옥;홍진우
    • The Journal of the Acoustical Society of Korea, v.18 no.2, pp.77-82, 1999
  • A sinusoidal model has been widely used in the analysis and synthesis of speech and audio signals, and has become one of the efficient candidates for high-quality low-bit-rate audio coders. One of the crucial steps in analysis and synthesis using a sinusoidal model is the detection of tonal components. This paper proposes an efficient method for the analysis and synthesis of audio signals using a sinusoidal model, which uses psychoacoustic criteria such as the masking effect, the masking index, and the JNDF (Just Noticeable Difference in Frequency). Simulation results show that the proposed method reduces the number of sinusoids significantly without degrading the quality of the synthesized audio signals.

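As a rough illustration of the pruning idea, the sketch below picks spectral peaks for a sinusoidal model and discards those falling under a crude masking floor; the paper's actual psychoacoustic criteria (masking index, JNDF) are considerably more refined:

```python
import numpy as np

# Simplified sketch: pick spectral peaks for a sinusoidal model and drop
# peaks below a crude masking floor (strongest peak minus a fixed margin).

def tonal_peaks(frame, fs, mask_drop_db=20.0):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    mag_db = 20 * np.log10(spec + 1e-12)
    peaks = [k for k in range(1, len(mag_db) - 1)
             if mag_db[k] > mag_db[k - 1] and mag_db[k] > mag_db[k + 1]]
    if not peaks:
        return []
    floor = max(mag_db[k] for k in peaks) - mask_drop_db
    return [(k * fs / len(frame), mag_db[k]) for k in peaks if mag_db[k] > floor]

fs = 16000
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * 3000 * t)
print(tonal_peaks(frame, fs))  # the weak 3 kHz component is pruned
```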

A New Wideband Speech/Audio Coder Interoperable with ITU-T G.729/G.729E (ITU-T G.729/G.729E와 호환성을 갖는 광대역 음성/오디오 부호화기)

  • Kim, Kyung-Tae;Lee, Min-Ki;Youn, Dae-Hee
    • Journal of the Institute of Electronics Engineers of Korea SP, v.45 no.2, pp.81-89, 2008
  • Wideband speech, characterized by a bandwidth of about 7 kHz (50-7000 Hz), provides a substantial quality improvement in terms of naturalness and intelligibility. Although higher data rates are required, it has extended its application to audio and video conferencing, high-quality multimedia communications in mobile links or packet-switched transmissions, and digital AM broadcasting. In this paper, we present a new bandwidth-scalable coder for wideband speech and audio signals. The proposed coder splits the 8 kHz signal bandwidth into two narrow bands, and different coding schemes are applied to each band. The lower-band signal is coded using the ITU-T G.729/G.729E coder, and the higher-band signal is compressed using a new algorithm based on the gammatone filter bank with an invertible auditory model. Due to the split-band architecture and completely independent coding schemes for each band, the output speech of the decoder can be selected to be narrowband or wideband according to the channel condition. Subjective tests showed that, for wideband speech and audio signals, the proposed coder at 14.2/18 kbit/s produces superior quality to ITU-T 24 kbit/s G.722.1 with a shorter algorithmic delay.
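
To make the split-band architecture concrete, a minimal sketch of the band division step is given below; the filters and rates are illustrative, and the per-band coders (G.729/G.729E and the gammatone-based scheme) are not reproduced:

```python
import numpy as np
from scipy.signal import firwin, lfilter

# Sketch of the split-band idea: a 16 kHz wideband input is divided into a
# 0-4 kHz band (which would go to G.729/G.729E) and a 4-8 kHz band (which
# would go to the gammatone-based coder). Filter design is illustrative.

def split_bands(x, fs=16000, ntaps=65):
    lp = firwin(ntaps, 4000, fs=fs)                   # low-pass at 4 kHz
    hp = firwin(ntaps, 4000, fs=fs, pass_zero=False)  # high-pass at 4 kHz
    low = lfilter(lp, 1.0, x)[::2]    # decimate to 8 kHz for the core coder
    high = lfilter(hp, 1.0, x)[::2]
    return low, high

x = np.random.randn(16000)            # one second of wideband "speech"
low, high = split_bands(x)
print(len(low), len(high))            # 8000 8000
```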

Corpus-based Korean Text-to-speech Conversion System (콜퍼스에 기반한 한국어 문장/음성변환 시스템)

  • Kim, Sang-hun;Park, Jun;Lee, Young-jik
    • The Journal of the Acoustical Society of Korea, v.20 no.3, pp.24-33, 2001
  • This paper describes a baseline implementation of a corpus-based Korean TTS system. Conventional TTS systems using small-sized speech databases still generate machine-like synthetic speech. To overcome this problem, we introduce a corpus-based TTS system that can generate natural synthetic speech without prosodic modifications. The corpus should be composed of source speech with natural prosody and multiple instances of synthesis units. To make phone-level synthesis units, we train a speech recognizer with the target speech and then perform automatic phoneme segmentation. We also detect the fine pitch period using laryngograph signals, which is used for prosodic feature extraction. For break strength allocation, four levels of break indices are determined by pause length and attached to phones to reflect prosodic variations at phrase boundaries. To predict break strength from text, we utilize the statistical information of POS (part-of-speech) sequences. The best triphone sequences are selected by a Viterbi search that minimizes the accumulated Euclidean distance of the concatenation distortion. To obtain synthetic speech of a quality suitable for commercial use, we introduce a domain-specific database. By adding a domain-specific database to the general-domain database, we can greatly improve the quality of synthetic speech in the specific domain. In subjective evaluation, the new Korean corpus-based TTS system shows better naturalness than the conventional demisyllable-based one.

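The unit-selection step lends itself to a compact sketch: for each target phone there are several candidate corpus units, and a Viterbi search picks the sequence minimizing the accumulated Euclidean concatenation distortion. The join features below are random stand-ins for the paper's actual features:

```python
import numpy as np

# Minimal Viterbi unit selection: pick one unit per target phone so that
# the accumulated Euclidean distance between consecutive units is minimal.

def select_units(candidates):
    """candidates: list over phones; each item is an (n_units, dim) array."""
    cost = np.zeros(len(candidates[0]))
    back = []
    for prev, cur in zip(candidates, candidates[1:]):
        # pairwise Euclidean distances between all prev and cur units
        d = np.linalg.norm(prev[:, None, :] - cur[None, :, :], axis=-1)
        total = cost[:, None] + d           # accumulate along each path
        back.append(np.argmin(total, axis=0))
        cost = np.min(total, axis=0)
    path = [int(np.argmin(cost))]           # backtrack the cheapest path
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1], float(np.min(cost))

rng = np.random.default_rng(0)
cands = [rng.normal(size=(5, 12)) for _ in range(4)]  # 4 phones, 5 units each
print(select_units(cands))
```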

A realization of pauses in utterance across speech style, gender, and generation (과제, 성별, 세대에 따른 휴지의 실현 양상 연구)

  • Yoo, Doyoung;Shin, Jiyoung
    • Phonetics and Speech Sciences, v.11 no.2, pp.33-44, 2019
  • This paper dealt with how the realization of pauses in utterances is affected by speech style, gender, and generation. For this purpose, we analyzed the frequency and duration of pauses. Pauses were categorized into four types: pause with breath, pause without breath, utterance-medial pause, and utterance-final pause. Forty-eight subjects living in Seoul were chosen from the Korean Standard Speech Database. All subjects engaged in both reading and spontaneous speech, which also allowed us to compare realization between the two speech styles. The results showed that utterance-final pauses had longer durations than utterance-medial pauses, suggesting that the utterance-final pause signals the end of an utterance to the audience. Between tasks, spontaneous speech had longer and more frequent pauses, for cognitive reasons. With regard to gender, women produced shorter and less frequent pauses, while for male speakers the duration of pauses with breath was significantly longer. Finally, for generation, older speakers produced more frequent pauses. The results also showed several interaction effects: male speakers produced longer pauses, and this gender effect was more prominent at the utterance-final position.
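
The aggregation behind such an analysis can be illustrated with a small, entirely hypothetical table of pauses; the column names and values are assumptions made for the sake of the example:

```python
import pandas as pd

# Each row is one pause with its type, the speaker's group variables, and
# its duration in milliseconds. All names and values here are hypothetical.

pauses = pd.DataFrame({
    "task":   ["read", "read", "spont", "spont", "spont"],
    "gender": ["F", "M", "F", "M", "M"],
    "type":   ["breath", "no_breath", "final", "breath", "medial"],
    "dur_ms": [320, 410, 690, 450, 280],
})

# mean duration and count per task and gender, as in the paper's comparisons
print(pauses.groupby(["task", "gender"])["dur_ms"].agg(["mean", "count"]))
```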

Comparison of Korean Real-time Text-to-Speech Technology Based on Deep Learning (딥러닝 기반 한국어 실시간 TTS 기술 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology, v.7 no.1, pp.640-645, 2021
  • A deep learning-based end-to-end TTS system consists of a Text2Mel module that generates a spectrogram from text and a vocoder module that synthesizes speech signals from the spectrogram. Recently, by applying deep learning technology to TTS systems, the intelligibility and naturalness of synthesized speech have improved to a level comparable to human vocalization. However, such systems have the disadvantage that the inference speed for synthesizing speech is very slow compared to conventional methods. The inference speed can be improved by applying non-autoregressive methods, which generate speech samples in parallel, independent of previously generated samples. In this paper, we introduce FastSpeech, FastSpeech 2, and FastPitch as Text2Mel technologies, and Parallel WaveGAN, Multi-band MelGAN, and WaveGlow as vocoder technologies applying the non-autoregressive method, and we implement them to verify whether they can run in real time. Experimental results show that, judging by the obtained RTF, all the presented methods are capable of real-time processing. The size of the learned models is about tens to hundreds of megabytes, except for WaveGlow, so they can be applied to embedded environments where memory is limited.
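
The real-time criterion used in such comparisons is the real-time factor (RTF): wall-clock synthesis time divided by the duration of the produced audio, with RTF < 1.0 meaning faster than real time. A minimal sketch with a stub synthesizer:

```python
import time
import numpy as np

# RTF = synthesis wall-clock time / duration of the synthesized audio.
# The synthesizer below is a stub standing in for a real TTS pipeline.

def measure_rtf(synthesize, text, sample_rate=22050):
    start = time.perf_counter()
    audio = synthesize(text)                 # returns a 1-D sample array
    elapsed = time.perf_counter() - start
    return elapsed / (len(audio) / sample_rate)

dummy_tts = lambda text: np.zeros(22050 * 2)   # stub: 2 s of silence
print(f"RTF = {measure_rtf(dummy_tts, '안녕하세요'):.4f}")
```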

Consonant/Vowel Segmentation in Monosyllabic Speech Data Using the Fractal Dimension (프랙탈 차원을 이용한 단음절 음성의 자·모음 분리)

  • Choi, Chul-Young;Kim, Hyung-Soon;Kim, Jae-Ho;Son, Kyung-Sik
    • The Journal of the Acoustical Society of Korea, v.13 no.3, pp.51-62, 1994
  • In this paper, we performed a set of experiments on segmenting consonants and vowels in Korean consonant-vowel (CV) monosyllable data using the fractal dimension of the speech signals. We chose the Minkowski-Bouligand dimension as the fractal dimension and computed it using the morphological covering method. In order to examine the usefulness of the fractal dimension in speech segmentation, we carried out segmentation experiments using the fractal dimension alone, the short-time energy alone, and both together, and compared the results. A segmentation accuracy of 96.1% was achieved using the product of the slope of the fractal dimension and that of the energy, while the accuracies using the slope of the fractal dimension or the energy alone were lower (93.6% and 88.0%, respectively). These results indicate that the fractal dimension can serve as a good parameter for speech segmentation.

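A simplified, variation-style sketch of the Minkowski-Bouligand estimate is given below: the signal graph is covered by flat dilation and erosion envelopes of growing width, and the dimension follows from the log-log slope of the covered area. The paper's exact morphological covering may differ in detail:

```python
import numpy as np

# The area A(eps) between dilation and erosion envelopes of width eps
# scales as eps**(2 - D), so D = 2 - slope of log A(eps) vs. log eps.

def running_extreme(x, half_width, fn):
    pad = np.pad(x, half_width, mode="edge")
    return np.array([fn(pad[i:i + 2 * half_width + 1]) for i in range(len(x))])

def fractal_dimension(x, scales=(1, 2, 4, 8, 16)):
    areas = []
    for eps in scales:
        upper = running_extreme(x, eps, np.max)   # flat dilation
        lower = running_extreme(x, eps, np.min)   # flat erosion
        areas.append(np.sum(upper - lower))       # area between envelopes
    slope = np.polyfit(np.log(scales), np.log(areas), 1)[0]
    return 2.0 - slope

x = np.cumsum(np.random.default_rng(1).normal(size=2048))  # Brownian-like path
print(fractal_dimension(x))   # should land near 1.5 for Brownian motion
```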

On a Pitch Alteration Method using Scaling the Harmonics Compensated with the Phase for Speech Synthesis (위상 보상된 고조파 스케일링에 의한 음성합성용 피치변경법)

  • Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea, v.13 no.6, pp.91-97, 1994
  • In speech processing, waveform codings are concerned with simply preserving the waveform of the signal through a redundancy reduction process. In speech synthesis, high-quality waveform codings are mainly used for synthesis by analysis. Because the parameters of such codings are not separated into excitation and vocal tract components, it is difficult to apply waveform coding to synthesis by rule; to do so, it is necessary to alter the pitch. In this paper, we propose a new pitch alteration method that can change the pitch period in waveform coding by dividing the speech signal into vocal tract and excitation parameters. The method is a time-frequency domain method that preserves the phase component of the waveform in the time domain and the magnitude component in the frequency domain, making it possible to use waveform coding for synthesis by rule. Using the algorithm, we obtain a spectrum distortion of 2.94%, which is 5.06 percentage points lower than that of the time-domain pitch alteration method.

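A heavily simplified sketch of the time-frequency idea follows: the harmonic spacing of a frame's magnitude spectrum is rescaled while the original phase is reused, and the frame is then inverse-transformed. The paper's phase compensation is more careful than this illustration:

```python
import numpy as np

# Rescale the harmonic spacing of a frame's magnitude spectrum by `factor`
# while keeping the original phase, then inverse-transform. Illustrative only.

def scale_pitch(frame, factor):
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    bins = np.arange(len(spec))
    # move content at bin k to bin k*factor by resampling the magnitudes
    mag_scaled = np.interp(bins / factor, bins, mag, left=0.0, right=0.0)
    return np.fft.irfft(mag_scaled * np.exp(1j * phase), n=len(frame))

fs = 8000
t = np.arange(512) / fs
frame = np.sin(2 * np.pi * 200 * t)      # 200 Hz "voiced" frame
shifted = scale_pitch(frame, 1.25)       # raises the pitch toward 250 Hz
```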

A New Pitch Detection Method Using The WRLS-VFF-VT Algorithm (WRLS-VFF-VT 알고리듬을 이용한 새로운 피치 검출 방법)

  • Lee, Kyo-Sik;Park, Kyu-Sik
    • The Transactions of the Korea Information Processing Society, v.5 no.10, pp.2725-2736, 1998
  • In this paper, we present a new pitch determination method for speech analysis, namely a VFF (Variable Forgetting Factor)-based method using the WRLS-VFF-VT (Weighted Recursive Least Square-Variable Forgetting Factor-Variable Threshold) algorithm. The proposed method uses the VFF to identify the glottal closure points, which correspond to the instants of the main excitation pulses for voiced speech. The modified EGG

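Although the abstract is truncated here, the RLS-with-forgetting-factor machinery it names can be sketched: a recursive least-squares linear predictor is run over the speech samples, and spikes in its prediction error mark candidate glottal closure instants. A fixed forgetting factor stands in for the paper's variable one, and the variable-threshold step is not reproduced:

```python
import numpy as np

# Recursive least-squares linear prediction with a forgetting factor.
# Spikes in the a priori prediction error suggest excitation instants.

def rls_residual(x, order=10, lam=0.98):
    w = np.zeros(order)
    P = np.eye(order) * 1e3
    err = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                # past samples, newest first
        k = P @ u / (lam + u @ P @ u)           # RLS gain vector
        err[n] = x[n] - w @ u                   # a priori prediction error
        w = w + k * err[n]
        P = (P - np.outer(k, u @ P)) / lam
    return err

fs = 8000
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 150 * t) * (1 + 0.1 * np.random.randn(len(t)))
residual = rls_residual(x)
print(np.argmax(np.abs(residual[100:])) + 100)  # strongest excitation-like spike
```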