• Title/Summary/Keyword: Speaker Variation


Effect of Glottal Wave Shape on the Vowel Phoneme Synthesis (성문파형이 모음음소합성에 미치는 영향)

  • 안점영;김명기
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.10 no.4
    • /
    • pp.159-167
    • /
    • 1985
  • It was demonstrated that the glottal waves differ depending on the kind of vowel when deriving the glottal waves directly from the Korean vowels /a, e, i, o, u/, which were recorded by a male speaker. After resynthesizing the vowels with five simulated glottal waves, the effects of glottal wave shape on speech synthesis were compared in terms of waveform. Some changes could be seen in the waveforms of the synthetic vowels with variation of the shape, opening time, and closing time; therefore, it was confirmed that in speech synthesis the glottal wave shape is an important factor in improving speech quality.

  • PDF
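The abstract above reports that varying the shape, opening time, and closing time of the glottal pulse changes the synthetic vowel waveform. A minimal sketch of one such parametric pulse, using the classical Rosenberg model rather than the paper's own five simulated waves (the exact shapes are not given in the abstract):

```python
import math

def rosenberg_pulse(n_samples, open_frac=0.6, close_frac=0.3):
    """One period of a Rosenberg-type glottal pulse.

    open_frac and close_frac set the opening and closing times as
    fractions of the period; varying them changes the pulse shape,
    and hence the spectrum of any vowel synthesized from it.
    """
    n_open = int(n_samples * open_frac)
    n_close = int(n_samples * close_frac)
    pulse = []
    for n in range(n_samples):
        if n < n_open:
            # opening phase: rising half-cosine from 0 up to 1
            pulse.append(0.5 * (1 - math.cos(math.pi * n / n_open)))
        elif n < n_open + n_close:
            # closing phase: falling quarter-cosine from 1 down to 0
            pulse.append(math.cos(math.pi * (n - n_open) / (2 * n_close)))
        else:
            # closed phase: glottis shut, zero flow
            pulse.append(0.0)
    return pulse
```

Feeding such a pulse train through a vocal-tract filter (e.g. an all-pole formant filter) yields a synthetic vowel whose waveform shifts with `open_frac` and `close_frac`, which is the effect the study measures.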

A new method of Extracting the Filter Characteristics of the Nasal Cavity Using Homorganic Nasal-Stop Sequences: A Preliminary Report (동기관음의 스펙트럼 차이를 이용한 비강 특성 산출: 예비 연구)

  • Park, Han-Sang
    • MALSORI
    • /
    • no.53
    • /
    • pp.17-35
    • /
    • 2005
  • This study provides a new method of extracting the filter characteristics of the nasal cavity. Korean lenis stops are realized as voiced in homorganic nasal-lenis stop sequences between vowels. Since the only difference between the two members of a homorganic nasal-lenis stop sequence, such as [mb], [nd], and [ŋg], is whether the passage to the nasal cavity is open or not, subtracting the LPC spectrum of the voiced stop from that of the preceding nasal yields the filter characteristics of the nasal cavity of an individual speaker, regardless of place of articulation. The results suggest that various attempts should be made to extract robust filter characteristics of the nasal cavity by varying the LPC coefficients and by paying particular attention to the speech samples. This study is significant in that it provides a preliminary report on a new method of extracting the filter characteristics of the nasal cavity.

  • PDF
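The subtraction the abstract describes can be sketched directly: compute the LPC log-magnitude spectrum of a nasal frame and of the following voiced-stop frame, then take the difference. This is a simplified illustration (autocorrelation-method LPC via the normal equations, with assumed frame length and order), not the paper's exact analysis settings:

```python
import numpy as np

def lpc_log_spectrum(frame, order=12, n_fft=512):
    """Log-magnitude LPC spectrum (dB) of one analysis frame,
    via the autocorrelation method."""
    frame = frame * np.hamming(len(frame))
    # autocorrelation at lags 0..order
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    # solve the normal equations R a = r for the predictor coefficients
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    # all-pole spectrum: 1 / |A(e^jw)|, expressed in dB
    A = np.fft.rfft(np.concatenate(([1.0], -a)), n_fft)
    return -20 * np.log10(np.abs(A) + 1e-12)

def nasal_cavity_response(nasal_frame, stop_frame, order=12):
    """Difference of LPC spectra, as in the abstract: the nasal's
    spectrum minus the homorganic voiced stop's spectrum."""
    return lpc_log_spectrum(nasal_frame, order) - lpc_log_spectrum(stop_frame, order)
```

Because the oral configuration is (near-)identical across the pair, the residual difference is attributed to the opened nasal side branch, independent of place of articulation.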

Stress Patterns of Compound Nouns in English (영어 복합명사의 강세형)

  • Lee Yeong-Kil
    • MALSORI
    • /
    • no.42
    • /
    • pp.25-36
    • /
    • 2001
  • Stress assignment has been much discussed in the literature on English compound nouns. The general view of the stress pattern of English compound nouns is that a main stress falls on the first element and a secondary stress on the second element; however, a stress pattern is often employed that provides counterevidence to the traditional pedagogical approach. A new idea is suggested by Ladd (1984) that 'compound stress represents the deaccenting of the head of the compound.' Recent studies show that initial stressing does not indicate compounds and that syntactic phrases are not always characterized by final stressing. In his pilot test, Pennanen comments on the frequent variation of stress patterns on individual items, on the basis of which Bauer confirms Pennanen's results with different informants. This paper is an attempt to justify Bauer's analysis with the same data as Bauer's and different subjects. It turns out that the competences of native-speaker informants do not provide clear-cut answers. Some factors should be taken into account in assigning appropriate stress to compound nouns.

  • PDF

An analysis of illocutionary force types in a dialogue, based on the context and modal information in the ending of a word (문맥 및 종결어미의 서법정보를 이용한 대화문의 화수력 분석)

  • 김영길;최병욱
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.10
    • /
    • pp.98-106
    • /
    • 1996
  • This paper proposes an algorithm for analyzing illocutionary force types (IFTs) in a dialogue, based on the context and the modal information in the ending of a word. In Korean, variation of the illocutionary force type that represents a speaker's intention frequently occurs at the ending of a word, according to the type of modal information, and in an analysis of speech acts the modal information is closely related to illocutionary force types. In this paper, we analyze real dialogue data, classify the types of illocutionary forces, perform manual tagging of IFTs, and show the frequency of each IFT's occurrence. We also propose an algorithm to extract IFTs, based on the relationship between the analyzed IFTs and the endings of a word, and use the proposed algorithm in an experiment on dialogue data to show its efficiency.

  • PDF

Effects of gender, age, and individual speakers on articulation rate in Seoul Korean spontaneous speech

  • Kim, Jungsun
    • Phonetics and Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.19-29
    • /
    • 2018
  • The present study investigated whether there are differences in articulation rate by gender, age, and individual speakers in a spontaneous speech corpus produced by 40 Seoul Korean speakers. This study measured their articulation rates using a second-per-syllable metric and a syllable-per-second metric. The findings are as follows. First, in spontaneous Seoul Korean speech, there was a gender difference in articulation rates only in age group 10-19, among whom men tended to speak faster than women. Second, individual speakers showed variability in their rates of articulation. The tendency for some speakers to speak faster than others was variable. Finally, there were metric differences in articulation rate. That is, regarding the coefficients of variation, the values of the second-per-syllable metric were much higher than those for the syllable-per-second metric. The articulation rate for the syllable-per-second metric tended to be more distinct among individual speakers. The present results imply that data gathered in a corpus of Seoul Korean spontaneous speech may reflect speaker-specific differences in articulatory movements.
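The two rate metrics and the coefficient-of-variation comparison in the abstract are straightforward to compute. A minimal sketch (utterance durations and syllable counts are assumed inputs; the corpus itself is not reproduced here):

```python
def articulation_metrics(durations_s, syllable_counts):
    """Per-utterance articulation rates in both metrics from the study:
    second-per-syllable and syllable-per-second."""
    spq = [d / n for d, n in zip(durations_s, syllable_counts)]  # second-per-syllable
    sps = [n / d for d, n in zip(durations_s, syllable_counts)]  # syllable-per-second
    return spq, sps

def coefficient_of_variation(values):
    """CV = standard deviation / mean; the study compares the CVs
    of the two metrics across speakers."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return var ** 0.5 / mean
```

A higher CV for one metric, as the study reports for second-per-syllable, means that metric spreads the same speakers over a wider relative range.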

Noise-Robust Speech Recognition Using Histogram-Based Over-estimation Technique (히스토그램 기반의 과추정 방식을 이용한 잡음에 강인한 음성인식)

  • 권영욱;김형순
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.6
    • /
    • pp.53-61
    • /
    • 2000
  • In speech recognition under noisy environments, reducing the mismatch between training and testing environments is an important issue. Spectral subtraction is a widely used technique because of its simplicity and relatively good performance in noisy environments. In this paper, we introduce the histogram method as a reliable noise estimation approach for spectral subtraction. This method has advantages over conventional noise estimation methods in that it does not need to detect non-speech intervals and it can estimate the noise spectra even in time-varying noise environments. Even when spectral subtraction is performed using a reliable average noise spectrum obtained by the histogram method, a considerable amount of residual noise remains due to variations of the instantaneous noise spectrum about the mean. To overcome this limitation, we propose a new over-estimation technique based on the distribution characteristics of the histogram used for noise estimation. Since the proposed technique decides the degree of over-estimation adaptively according to the measured noise distribution, it has the advantage of being less influenced by SNR variation in the noise levels. According to speaker-independent isolated word recognition experiments in a car noise environment under various SNR conditions, the proposed histogram-based over-estimation technique outperforms the conventional over-estimation technique.

  • PDF
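The two building blocks the abstract names, histogram-based noise estimation and over-estimated spectral subtraction, can be sketched as follows. This is a generic textbook form with assumed parameter values (a fixed over-estimation factor `alpha` and spectral floor), not the paper's adaptive rule for choosing the over-estimation degree:

```python
import numpy as np

def histogram_noise_estimate(power_frames, n_bins=40):
    """Estimate the noise power in each frequency bin as the mode of the
    histogram of observed powers across frames. Because noise dominates
    the most frequent low-power values, no speech/non-speech detection
    is needed."""
    noise = np.empty(power_frames.shape[1])
    for k in range(power_frames.shape[1]):
        counts, edges = np.histogram(power_frames[:, k], bins=n_bins)
        mode = counts.argmax()
        noise[k] = 0.5 * (edges[mode] + edges[mode + 1])  # bin center
    return noise

def spectral_subtraction(power_frame, noise, alpha=2.0, floor=0.01):
    """Over-estimated spectral subtraction: subtract alpha times the noise
    estimate and floor the result to limit residual 'musical' noise."""
    return np.maximum(power_frame - alpha * noise, floor * power_frame)
```

The paper's contribution is to set `alpha` adaptively from the spread of the same histogram, rather than fixing it in advance as done here.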

A Study on the Audio Compensation System (음향 보상 시스템에 관한 연구)

  • Jeoung, Byung-Chul;Won, Chung-Sang
    • The Journal of the Acoustical Society of Korea
    • /
    • v.32 no.6
    • /
    • pp.509-517
    • /
    • 2013
  • In this paper, we researched a method for building a good acoustic-speech system using digital signal processing with a dynamic microphone as the transducer. A good acoustic-speech system should deliver the original sound input to the electric signal without distortion. By measuring the frequency response of the microphone, adjustment factors are obtained by comparing the measured data with the standard frequency response of the microphone for each frequency band. The final sound levels are obtained by applying the developed adjustment factors of the frequency responses of the microphone and speaker to match the original sound levels through digital signal processing. We then minimize the changes in frequency response and level due to variation of the distance from the source to the microphone, where the frequency responses were measured according to the distance changes.
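The per-band compensation described above amounts to differencing the measured response against the standard one and applying the result as a gain correction. A minimal sketch in dB terms (band layout and reference curve are assumed; the paper's band definitions are not given in the abstract):

```python
def band_adjustment_factors(measured_db, reference_db):
    """Per-band gain corrections in dB: the gap between the standard
    (reference) response and the measured microphone response."""
    return [ref - meas for ref, meas in zip(reference_db, measured_db)]

def apply_correction(levels_db, factors_db):
    """Apply the corrections so the compensated chain matches the
    reference response band by band."""
    return [x + g for x, g in zip(levels_db, factors_db)]
```

Re-measuring `measured_db` at several source-to-microphone distances and recomputing the factors is how the distance-dependent variation the abstract mentions would be minimized.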

A Study on the Robust Double Talk Detector for Acoustic Echo Cancellation System (음향반항 제거 시스템을 위한 강인한 동시통화 검출기에 관한 연구)

  • 백수진;박규식
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.2
    • /
    • pp.121-128
    • /
    • 2003
  • Acoustic Echo Cancellation (AEC) is a very active research topic with many applications, such as teleconferencing and hands-free communication, and it employs a Double Talk Detector (DTD) to indicate whether the near-end speaker is active or not. However, the DTD is very sensitive to variation of the acoustic environment and sometimes provides wrong information about the near-end speaker. In this paper, we focus on the development of a robust DTD algorithm, which is a basic building block of a reliable AEC system. The proposed AEC system consists of a delayless subband AEC and a narrowband DTD. The delayless subband AEC has proven to deliver excellent echo cancellation with low complexity and high convergence speed; in addition, it solves the signal delay problem of existing subband AEC. The proposed narrowband DTD operates on a low-frequency subband, taking advantage of the narrow subband's low computational complexity due to down-sampling and of a reliable decision-making procedure owing to the low-frequency nature of the subband signal. From simulation results for the proposed narrowband DTD and a wideband DTD, we confirm that the proposed DTD outperforms the wideband DTD in removing possible false decisions about near-end speaker activity.
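For orientation, the basic double-talk decision that such systems refine can be illustrated with the classical Geigel detector, a standard baseline rather than the paper's subband DTD: declare near-end speech whenever the microphone sample exceeds a fraction of the recent far-end peak.

```python
def geigel_dtd(mic_sample, far_end_history, threshold=0.5):
    """Classical Geigel double-talk detector (baseline illustration,
    not the paper's narrowband DTD). Returns True when the microphone
    signal is too large to be explained by echo of the far-end signal
    alone, i.e. the near-end speaker is likely active."""
    peak = max(abs(x) for x in far_end_history)
    return abs(mic_sample) > threshold * peak
```

The paper's narrowband DTD makes this kind of decision on a down-sampled low-frequency subband, which cuts the computation and, per the abstract, yields fewer false detections than a wideband decision.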

Robust Speech Recognition Parameters for Emotional Variation (감정 변화에 강인한 음성 인식 파라메터)

  • Kim Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.6
    • /
    • pp.655-660
    • /
    • 2005
  • This paper studied the feature parameters less affected by emotional variation for the development of robust speech recognition technologies. For this purpose, the effect of emotional variation on the speech recognition system and robust feature parameters for it were studied using a speech database containing various emotions. In this study, LPC cepstral coefficients, mel-cepstral coefficients, root-cepstral coefficients, PLP coefficients, and RASTA mel-cepstral coefficients were used as feature parameters, and the CMS and SBR methods were used as signal bias removal techniques. Experimental results showed that the HMM-based speaker-independent word recognizer using RASTA mel-cepstral coefficients and their derivatives, with CMS as signal bias removal, gave the best performance of 7.05% word error rate. This corresponds to about a 52% word error reduction compared to the baseline system using mel-cepstral coefficients.
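Of the techniques listed, the CMS step is simple enough to show directly: subtract the per-utterance mean from every cepstral dimension, which cancels any stationary convolutional bias (channel or, here, part of the emotion-induced shift). A minimal sketch:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """CMS: remove the per-utterance mean of each cepstral dimension.

    cepstra is a (frames x coefficients) array; a constant spectral
    tilt appears as a constant cepstral offset, so subtracting the
    mean over frames removes it."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```

After CMS, every coefficient track averages to zero over the utterance, so two recordings of the same words through different stationary channels yield near-identical features.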

Robust Speech Recognition using Vocal Tract Normalization for Emotional Variation (성도 정규화를 이용한 감정 변화에 강인한 음성 인식)

  • Kim, Weon-Goo;Bang, Hyun-Jin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.773-778
    • /
    • 2009
  • This paper studied training methods less affected by emotional variation for the development of a robust speech recognition system. For this purpose, the effect of emotional variation on the speech signal was studied using a speech database containing various emotions. The performance of a speech recognition system trained on speech containing no emotion deteriorates if the test speech contains emotions, because of the emotional difference between the test and training data. In this study, it is observed that the vocal tract length of the speaker is affected by emotional variation, and this effect is one of the reasons the performance of the speech recognition system worsens. In this paper, the vocal tract normalization method is used to develop a speech recognition system robust to emotional variations. Experimental results from isolated word recognition using HMMs showed that vocal tract normalization reduced the error rate of the conventional recognition system by 41.9% when emotional test data was used.
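Vocal tract length normalization is usually realized as a warp of the frequency axis before feature extraction. A sketch of the common piecewise-linear warp (the paper's exact warping function is not specified in the abstract, so the cutoff and range here are assumed values):

```python
import numpy as np

def vtln_warp(freqs_hz, alpha, f_cut=4800.0, f_max=8000.0):
    """Piecewise-linear VTLN frequency warp.

    alpha scales frequencies below f_cut (alpha > 1 mimics a shorter
    vocal tract, alpha < 1 a longer one); above f_cut the warp is a
    straight line chosen so that f_max maps to f_max, keeping the
    warped axis within the analysis bandwidth."""
    freqs_hz = np.asarray(freqs_hz, dtype=float)
    return np.where(
        freqs_hz <= f_cut,
        alpha * freqs_hz,
        alpha * f_cut + (f_max - alpha * f_cut) * (freqs_hz - f_cut) / (f_max - f_cut),
    )
```

In recognition, `alpha` is chosen per speaker (or per utterance) to maximize the acoustic likelihood, which compensates for the emotion-induced vocal tract length changes the study observed.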