• Title/Summary/Keyword: speech waveform


A Comparative Study of Glottal Data from Normal Adults Using Two Laryngographs

  • Yang, Byung-Gon;Wang, Soo-Geun;Kwon, Soon-Bok
    • Speech Sciences
    • /
    • v.10 no.1
    • /
    • pp.15-25
    • /
    • 2003
  • A laryngograph was developed in our laboratory to measure the opening and closing movements of the vocal folds. This study evaluated its performance by comparing its glottal data with those of the original laryngograph. Ten normal Korean adults participated in the experiment. Each subject produced a sustained vowel /a/ for about five seconds. This study compared f0 values, contact quotients (the duration of closed vocal folds over one glottal pulse), and area quotients (the closed over the open vocal fold area) derived from the glottal waves of both the original and the new laryngograph. Results showed that the means and standard deviations of the two laryngographs were almost comparable, with a correlation coefficient of 0.662, although a minor systematic shift below the values of the original laryngograph was observed. The absolute mean difference converged to 1 Hz, which suggests the possibility of adopting a threshold for rejecting inappropriate pitch values. The contact quotient of the normal subjects was slightly over 50% in citation speech. Finally, the area quotient converged to 1. We will pursue further studies on abnormal patients in the future.

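The contact quotient described in the abstract above, the fraction of each glottal cycle during which the vocal folds are closed, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; it assumes a synthetic EGG-like waveform, a known f0, and a simple midpoint threshold to mark the closed phase.

```python
import numpy as np

def contact_quotient(egg, fs, f0):
    """Estimate the contact quotient: the fraction of each glottal
    cycle during which the EGG signal indicates vocal fold contact.
    A threshold at the midpoint of the signal range marks the
    'closed' (high-contact) portion of each cycle."""
    period = int(fs / f0)                  # samples per glottal cycle
    n_cycles = len(egg) // period
    threshold = (egg.max() + egg.min()) / 2
    cqs = []
    for k in range(n_cycles):
        cycle = egg[k * period:(k + 1) * period]
        cqs.append(np.mean(cycle > threshold))  # closed fraction of this cycle
    return float(np.mean(cqs))

# Synthetic EGG-like wave: half-wave rectified 100 Hz sine
fs, f0 = 8000, 100.0
t = np.arange(fs) / fs
egg = np.clip(np.sin(2 * np.pi * f0 * t), 0, None)
cq = contact_quotient(egg, fs, f0)
print(round(cq, 2))  # about one third: sin exceeds 0.5 over a third of the cycle
```

A real laryngograph analysis would detect cycle boundaries from the glottal waveform itself rather than assume a fixed f0, but the quotient computation is the same idea.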

A Speech Translation System for Hotel Reservation (호텔예약을 위한 음성번역시스템)

  • 구명완;김재인;박상규;김우성;장두성;홍영국;장경애;김응인;강용범
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.4
    • /
    • pp.24-31
    • /
    • 1996
  • In this paper, we present a speech translation system for hotel reservation, KT-STS (Korea Telecom Speech Translation System). KT-STS is a speech-to-speech translation system which translates a spoken utterance in Korean into one in Japanese. The system has been designed around the task of hotel reservation (dialogues between a Korean customer and a hotel reservation desk in Japan). It consists of a Korean speech recognition system, a Korean-to-Japanese machine translation system, and a Korean speech synthesis system. The Korean speech recognition system is an HMM (Hidden Markov Model)-based speaker-independent, continuous speech recognizer with a vocabulary of about 300 words. A bigram language model is used as the forward language model and a dependency grammar is used as the backward language model. For machine translation, we use a dependency grammar and the direct transfer method. The Korean speech synthesizer uses demiphones as the synthesis unit and the method of periodic waveform analysis and reallocation. KT-STS runs in nearly real time on a SPARC20 workstation with one TMS320C30 DSP board. We achieved a word recognition rate of 94.68% and a sentence recognition rate of 82.42% in speech recognition tests. In Korean-to-Japanese translation tests, we achieved a translation success rate of 100%. We also held an international joint experiment in which our system was connected over a leased line with another system developed by KDD in Japan.


Intonation Training System (Visual Analysis Tool) and the application of French Intonation for Korean Learners (컴퓨터를 이용한 억양 교육 프로그램 개발 : 프랑스어 억양 교육을 중심으로)

  • Yu, Chang-Kyu;Son, Mi-Ra;Kim, Hyun-Gi
    • Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.49-62
    • /
    • 1999
  • This study is concerned with the educational program Visual Analysis Tool (VAT), developed for teaching foreign intonation using a personal computer. The VAT runs on an IBM-PC 386 compatible or higher. It shows the spectrogram, waveform, intensity, and pitch contour, and supports both waveform zoom in/out and the documentation of measured values. In this paper, intensity and pitch contour information were used. Twelve French sentences were recorded from a French conversational tape, and three Koreans participated in this study. They spoke the twelve sentences repeatedly and tried to produce the same pitch contour by visually matching their pitch contour to the native speaker's. The sentences were recorded again once the participants had become familiar with the intonation, intensity, and pauses. The differences between the native speaker's speech and theirs before and after training were compared in terms of pitch contour (rising or falling), pitch value, energy, total sentence duration, and the boundaries of rhythmic groups. The results were as follows: 1) In a declarative sentence, the native speaker's pitch contour falls at the end of the sentence, but the participants' pitch contours were flat before training. 2) In an interrogative, the native speaker's pitch contour rises at the end of the sentence, with the exception of wh-questions (qu'est-ce que), and the pitch value varied a great deal. In interrogative 'S + V' form sentences, the pitch contour rose higher than in other sentences and varied a great deal. 3) In an exclamatory sentence, the pitch contour looked like the shape of a mountain, but the participants could not make it fall, before or after training.


A Very Low-Bit-Rate Analysis-by-Synthesis Speech Coder Using Zinc Function Excitation (Zinc 함수 여기신호를 이용한 분석-합성 구조의 초 저속 음성 부호화기)

  • Seo Sang-Won;Kim Jong-Hak;Lee Chang-Hwan;Jeong Gyu-Hyeok;Lee In-Sung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.6
    • /
    • pp.282-290
    • /
    • 2006
  • This paper proposes a new digital reverberator that models the analog helical coil spring reverberator used in guitar amplifiers. While conventional digital reverberators have been proposed to provide a better sound field mainly based on room acoustics, no algorithm or analysis of digital reverberators that model the helical coil spring reverberator had been proposed. Considering that approximately 70-80 percent of guitar amplifiers still use a helical coil spring reverberator, this research was based not on room acoustics but on the helical coil spring reverberator itself as an effector. Simulations confirmed that the digital reverberator produced by the proposed algorithm provides a perceptually equivalent response to conventional analog helical coil spring reverberators.

A Study on 8kbps FBD-MPC Method Considering Low Bit Rate (Low Bit Rate을 고려한 8kbps FBD-MPC 방식에 관한 연구)

  • Lee, See-Woo
    • Journal of Digital Convergence
    • /
    • v.12 no.6
    • /
    • pp.271-276
    • /
    • 2014
  • In a speech coding system using voiced and unvoiced excitation sources, speech quality distortion can occur when voiced sounds and unvoiced consonants coexist in a frame. In this paper, I propose an 8 kbps Multi-Pulse speech Coding method (FBD-MPC: Frequency Band Division MPC) that uses TSIUVC (Transition Segment Including Unvoiced Consonant) searching, extraction, and approximation-synthesis in the frequency domain. I evaluated the 8 kbps MPC and FBD-MPC. As a result, the SNRseg of FBD-MPC was improved by 0.5 dB for female voices and 0.2 dB for male voices, respectively. The improvement in SNRseg over the MPC shows that the distortion of the speech waveform could finally be controlled. I therefore expect this method to be applicable to cellular phones and smartphones using a low-bit-rate excitation source.

Voice-to-voice conversion using transformer network (Transformer 네트워크를 이용한 음성신호 변환)

  • Kim, June-Woo;Jung, Ho-Young
    • Phonetics and Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.55-63
    • /
    • 2020
  • Voice conversion can be applied to various voice processing applications and can also play an important role in data augmentation for speech recognition. The conventional method uses a voice conversion architecture based on speech synthesis, with the Mel filter bank as the main parameter. The Mel filter bank is well suited for fast neural network computation, but it cannot be converted into a high-quality waveform without the aid of a vocoder, and it is not effective for obtaining data for speech recognition. In this paper, we focus on performing voice-to-voice conversion using only the raw spectrum. We propose a deep learning model based on the transformer network, which quickly learns the voice conversion properties using an attention mechanism between source and target spectral components. The experiments were performed on the TIDIGITS data, a series of numbers spoken by English speakers. The converted voices were evaluated for naturalness and similarity using the mean opinion score (MOS) obtained from 30 participants. Our final results yielded 3.52±0.22 for naturalness and 3.89±0.19 for similarity.
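The attention mechanism between source and target spectral components mentioned in the abstract above can be sketched with the standard scaled dot-product attention at the core of the transformer. This is a minimal numpy illustration, not the authors' model; the toy frame counts and dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_dot_product_attention(q, k, v):
    """Each target (query) frame attends over all source (key) frames
    and returns a weighted sum of the source (value) frames."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # (T_tgt, T_src) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over source frames
    return weights @ v                              # (T_tgt, d) attended frames

# Toy spectra: 5 target-side query frames attending over 8 source frames,
# each with 16 spectral bins
src = rng.standard_normal((8, 16))   # source spectral frames
tgt = rng.standard_normal((5, 16))   # target-side queries
out = scaled_dot_product_attention(tgt, src, src)
print(out.shape)  # (5, 16)
```

In the full model, learned projection matrices would map the raw spectra to queries, keys, and values, and multiple attention heads would be stacked.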

A Comparison Study on the Speech Signal Parameters for Chinese Leaners' Korean Pronunciation Errors - Focused on Korean /ㄹ/ Sound (중국인 학습자의 한국어 발음 오류에 대한 음성 신호 파라미터들의 비교 연구 - 한국어의 /ㄹ/ 발음을 중심으로)

  • Lee, Kang-Hee;You, Kwang-Bock;Lim, Ha-Young
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.6
    • /
    • pp.239-246
    • /
    • 2017
  • This paper compares the speech signal parameters of Korean and Chinese speakers for the Korean pronunciation /ㄹ/, which causes many errors for Chinese learners. The allophones of /ㄹ/ in Korean are divided into a lateral group and a tap group. The reasons for these errors were investigated by studying the similarities and differences between the Korean /ㄹ/ pronunciation and its corresponding Chinese pronunciation. From the phonological perspective, speech signal parameters such as signal energy, the waveform in the time domain, the spectrogram in the frequency domain, the pitch (F0) based on the autocorrelation function (ACF), and the formant frequencies (f1, f2, f3, and f4) were measured and compared. The data, composed of a group of Korean words selected through a philological investigation, were used in the simulations in this paper. According to the simulation results for energy and spectrogram, there are meaningful differences between Korean native speakers and Chinese learners for the Korean /ㄹ/ pronunciation. The simulation results also show some differences for the other parameters. It can be expected that Chinese learners will be able to reduce their errors considerably by exploiting the parameters used in this paper.
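The ACF-based pitch (F0) measurement named in the abstract above is a standard technique: the autocorrelation of a voiced frame peaks at the lag corresponding to the pitch period. A minimal numpy sketch, assuming a single synthetic voiced frame and a fixed plausible pitch range (the paper's exact analysis settings are not given):

```python
import numpy as np

def pitch_acf(frame, fs, f_min=60.0, f_max=400.0):
    """Estimate F0 from one frame via the autocorrelation function (ACF):
    the lag of the ACF peak inside the plausible pitch range."""
    frame = frame - frame.mean()
    acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(fs / f_max)            # smallest lag = highest pitch
    lag_max = int(fs / f_min)            # largest lag = lowest pitch
    lag = lag_min + np.argmax(acf[lag_min:lag_max])
    return fs / lag                      # lag of the peak -> fundamental freq

fs = 16000
t = np.arange(int(0.04 * fs)) / fs       # one 40 ms frame
tone = np.sin(2 * np.pi * 150.0 * t)     # synthetic 150 Hz "voiced" frame
print(round(pitch_acf(tone, fs), 1))     # close to 150 Hz
```

Real speech would first be windowed into short frames, and unvoiced frames (weak ACF peaks) would be excluded before comparing F0 across speakers.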

A New Endpoint Detection Method Based on Chaotic System Features for Digital Isolated Word Recognition System (음성인식을 위한 혼돈시스템 특성기반의 종단탐색 기법)

  • Zang, Xian;Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.5
    • /
    • pp.8-14
    • /
    • 2009
  • In the field of speech recognition, pinpointing the endpoints of a speech utterance even in the presence of background noise is of great importance. The noise present during recording introduces disturbances that complicate matters, since the goal is to obtain the stationary parameters corresponding to each speech section. One major cause of error in the automatic recognition of isolated words is inaccurate detection of the beginning and end boundaries of the test and reference templates, hence the need for an effective method of removing the unnecessary regions of a speech signal. The conventional methods for speech endpoint detection are based on two linear time-domain measurements: the short-time energy and the short-time zero-crossing rate. They perform well for clean speech, but their precision is not guaranteed in the presence of noise, since the high energy and zero-crossing rate of the noise are mistaken for parts of the uttered speech. This paper proposes a novel approach to finding a clear threshold between noise and speech based on Lyapunov Exponents (LEs). The proposed method adopts nonlinear features to analyze the chaotic characteristics of the speech signal instead of depending on the unreliable energy feature. Its advantage over the conventional methods lies in the fact that it detects the endpoints through the nonlinearity of the speech signal, which we believe is an important characteristic that the conventional methods have neglected. The proposed method extracts features based only on the time-domain waveform of the speech signal, illustrating its low complexity. Simulations showed the effective performance of the proposed method in a noisy environment, with an average recognition rate of up to 92.85% for unspecified speakers.
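The conventional baseline that the abstract above improves on, endpoint detection from short-time energy and zero-crossing rate, is simple enough to sketch. This is an illustrative numpy version of that baseline (not the paper's Lyapunov-exponent method), with an assumed frame length and energy threshold; on clean signals it works exactly as the abstract describes, and its failure under noise is what motivates the nonlinear features.

```python
import numpy as np

def endpoints_energy_zcr(x, fs, frame_ms=20, energy_ratio=0.1):
    """Conventional endpoint detection: mark frames whose short-time
    energy exceeds a fraction of the peak frame energy, and take the
    first/last such frame as the utterance boundaries. The zero-crossing
    rate is also computed; in practice it helps extend boundaries over
    low-energy unvoiced sounds."""
    n = int(fs * frame_ms / 1000)
    frames = x[: len(x) // n * n].reshape(-1, n)
    energy = (frames ** 2).sum(axis=1)                         # short-time energy
    zcr = (np.diff(np.sign(frames), axis=1) != 0).sum(axis=1)  # zero crossings
    active = energy > energy_ratio * energy.max()
    idx = np.flatnonzero(active)
    start, end = idx[0] * n, (idx[-1] + 1) * n                 # sample boundaries
    return start, end, zcr

# Toy utterance: 0.2 s silence, 0.4 s tone, 0.2 s silence
fs = 8000
sil = np.zeros(int(0.2 * fs))
tone = np.sin(2 * np.pi * 200 * np.arange(int(0.4 * fs)) / fs)
x = np.concatenate([sil, tone, sil])
start, end, _ = endpoints_energy_zcr(x, fs)
print(start / fs, end / fs)  # 0.2 0.6
```

With additive noise, `energy.max()` and the zero-crossing counts of the silent regions rise, and the fixed threshold starts to misplace the boundaries, which is exactly the weakness the LE-based approach targets.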

A study on loss combination in time and frequency for effective speech enhancement based on complex-valued spectrum (효과적인 복소 스펙트럼 기반 음성 향상을 위한 시간과 주파수 영역 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.1
    • /
    • pp.38-44
    • /
    • 2022
  • Speech enhancement is performed to improve the intelligibility and quality of noise-corrupted speech. In this paper, speech enhancement performance was compared using different loss functions in the time and frequency domains. This study proposes a combination of loss functions that exploits the advantages of each domain by considering both the details of the spectrum and the speech waveform. In our study, the Scale-Invariant Source-to-Noise Ratio (SI-SNR) is used as the time-domain loss function, and the Mean Squared Error (MSE), calculated over the complex-valued spectrum and the magnitude spectrum, is used for the frequency domain. The phase loss is obtained using the sine function. Speech enhancement results are evaluated using the Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI). To confirm the speech enhancement results, the resulting spectrograms are also compared. Experimental results on the TIMIT database show the highest performance when using the combination of the SI-SNR and magnitude loss functions.
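The SI-SNR used as the time-domain loss in the abstract above has a standard closed form: project the estimate onto the target, then compare the power of the projection with the power of the residual. A minimal numpy sketch (the training loss would be its negative, combined with a weighted magnitude MSE; the weighting is the paper's design choice and is not reproduced here):

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-Invariant Source-to-Noise Ratio in dB. Both signals are
    zero-meaned, the estimate is projected onto the target, and the
    projection/residual power ratio is returned. Higher is better."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    s_target = (estimate @ target) / (target @ target + eps) * target  # projection
    e_noise = estimate - s_target                                      # residual
    return 10 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps))

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)                 # 1 s surrogate "clean" signal
noisy = clean + 0.1 * rng.standard_normal(16000)   # 10:1 amplitude ratio
print(round(si_snr(noisy, clean), 1))  # roughly 20 dB
```

Because of the projection, scaling the estimate by any nonzero constant leaves the value unchanged, which is what makes the loss scale-invariant.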

Phoneme Separation and Establishment of Time-Frequency Discriminative Pattern on Korean Syllables (음절신호의 음소 분리와 시간-주파수 판별 패턴의 설정)

  • 류광열
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.16 no.12
    • /
    • pp.1324-1335
    • /
    • 1991
  • In this paper, phoneme separation and the establishment of discriminative patterns for Korean phonemes are studied experimentally. The separation uses parameters such as pitch extraction, the glottal peak pulse width of each pitch period, speech duration, envelope, and amplitude bias. The first pitch is extracted from deviations of the glottal peak and width, energy, and normalization of a bias at the top of the vowel envelope; adjacent pitch periods are then traced through the whole vowel. For vowels, a method to reduce the gliding pattern and to distinguish vowels using only the second formant is proposed, and a shrunken pitch waveform that is independent of pitch length is estimated. For consonants, patterns of envelope, spectrum, and shrunken waveform are detected, together with a method of analysis based on the mutual relations among phonemes and the manners of articulation. Experimental results show discrimination rates of 90% for vowel phonemes, and 80% and 60% for initial and final consonants, respectively.
