• Title/Abstract/Keywords: Speech Corpus


음절핵의 위치정보를 이용한 우리말의 음소경계 추출 (Utilization of Syllabic Nuclei Location in Korean Speech Segmentation into Phonemic Units)

  • 신옥근
    • 한국음향학회지 / Vol. 19, No. 5 / pp. 13-19 / 2000
  • Among methods for extracting phoneme boundaries from speech signals, blind segmentation, which detects boundaries from changes in the speech data or feature vectors without any prior knowledge of the phonemes, plays an important role in continuous speech recognition systems and corpus construction, and has been studied extensively. Because it requires no prior knowledge, blind segmentation is relatively easy to apply, but its performance falls short of knowledge-based segmentation methods that exploit phonological knowledge, knowledge of phonemes and phoneme boundaries, or empirical data. This paper proposes a method that improves segmentation performance for continuous Korean speech by first extracting candidate boundaries with blind segmentation and then post-processing the candidates using the locations of syllabic nuclei. The pre-processing stage uses a clustering method based on a probabilistic distance model, and the post-processing stage exploits the a priori knowledge that only a limited number of phonemes can occur between two syllabic nuclei. Experimental results show that the proposed method reduces insertion errors by about 25% compared with blind segmentation alone.

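As a rough illustration of the post-processing step described in the abstract above, the sketch below prunes boundary candidates from a blind segmenter using the positions of syllabic nuclei, keeping only a few of the best-scoring candidates between neighbouring nuclei. The candidate scores and the per-interval limit are assumptions for illustration, not the paper's actual model.

```python
# Hypothetical sketch: prune blindly detected boundary candidates using
# syllabic nuclei locations. Scores and the per-interval limit are
# illustrative assumptions, not the paper's actual model.

def prune_boundaries(candidates, nuclei, max_per_interval=3):
    """candidates: list of (time_sec, score); nuclei: syllable nucleus times."""
    kept = []
    # Build intervals between consecutive syllabic nuclei (plus the edges).
    edges = [0.0] + sorted(nuclei) + [float("inf")]
    for left, right in zip(edges[:-1], edges[1:]):
        in_interval = [c for c in candidates if left <= c[0] < right]
        # A priori constraint: only a few phoneme boundaries can occur
        # between two nuclei, so keep only the best-scoring candidates.
        in_interval.sort(key=lambda c: c[1], reverse=True)
        kept.extend(in_interval[:max_per_interval])
    return sorted(kept)

if __name__ == "__main__":
    cands = [(0.05, 0.9), (0.12, 0.4), (0.18, 0.7), (0.22, 0.3), (0.40, 0.8)]
    nuclei = [0.10, 0.30]          # detected syllable nucleus times (sec)
    print(prune_boundaries(cands, nuclei, max_per_interval=2))
```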

TTS DB 압축을 위한 광대역 파형보간 부호기 구현 (Implementation of Wideband Waveform Interpolation Coder for TTS DB Compression)

  • 양희식; 한민수
    • 대한음성학회지:말소리 / Vol. 55 / pp. 143-158 / 2005
  • An adequate compression algorithm is essential to achieve a high-quality embedded TTS system. In this paper, after investigating a number of speech coders, we propose a waveform interpolation coder for TTS corpus compression. Unlike speech coders used in communication systems, compression rate and quality matter more in TTS DB compression than other performance criteria. We therefore select the waveform interpolation algorithm because it provides good speech quality at a high compression rate, at the cost of complexity. The implemented coder runs at a bit rate of 6 kbps with a quality degradation of 0.47. This performance indicates that, with some further study, waveform interpolation is adequate for TTS DB compression.

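For a rough sense of scale, the short calculation below compares the reported 6 kbps rate with uncompressed 16 kHz, 16-bit PCM, a common TTS DB format; the PCM parameters are an assumption, since the abstract does not state the source format.

```python
# Rough compression-ratio arithmetic (assumed 16 kHz / 16-bit PCM source).
sample_rate_hz = 16_000
bits_per_sample = 16
pcm_bps = sample_rate_hz * bits_per_sample      # 256,000 bit/s
coder_bps = 6_000                               # reported 6 kbps

print(f"PCM bit rate      : {pcm_bps / 1000:.0f} kbps")
print(f"Coder bit rate    : {coder_bps / 1000:.0f} kbps")
print(f"Compression ratio : {pcm_bps / coder_bps:.1f} : 1")  # about 42.7 : 1
```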

자유 발화 자료에서 나타나는 한국어 억양 곡선의 기울기 특성에 대한 연구 (A Study of Intonation Curve Slopes in Korean Spontaneous Speech)

  • 오재혁
    • 말소리와 음성과학 / Vol. 6, No. 1 / pp. 21-30 / 2014
  • This study examines the slopes of pitch movements on Korean intonation curves in spontaneous speech data. For this study, 656 utterances were taken from a spoken corpus and stylized with 'close-copy stylization', after which the physical features of the pitch movements were extracted. Pitch slope was calculated on the basis of the time and pitch range of each utterance. The results show that, apart from the inherent difference in pitch range, the average and distribution of pitch slopes are similar for men and women; the slope of pitch movement thus shows no difference between the sexes. Pitch slopes on a scale of -10 to 10 account for 90% of all pitch slopes; movements that extend in time without a change in pitch account for 33.1%; movements that cover half of the pitch bandwidth within the average pitch-movement time account for 23.4%; and movements that cover the full pitch bandwidth within half of the average pitch-movement time account for 10.4%. These results suggest the possibility of standardizing the description of Korean intonation in terms of pitch slope.
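
A minimal sketch of a normalized pitch-slope measure in the spirit of the abstract above: the pitch change as a fraction of the pitch range, divided by the duration in units of the average pitch-movement time. The exact normalization used in the paper is not given here, so the formula is an assumption.

```python
# Hypothetical normalized pitch-slope measure for a stylized pitch movement.
# Normalization choices are assumptions for illustration only.

def pitch_slope(f0_start, f0_end, t_start, t_end, pitch_range, avg_move_time):
    """Return pitch change (as a fraction of the utterance pitch range)
    per unit of average pitch-movement time."""
    d_pitch = (f0_end - f0_start) / pitch_range          # fraction of range
    d_time = (t_end - t_start) / avg_move_time           # in avg-move units
    return d_pitch / d_time if d_time > 0 else 0.0

if __name__ == "__main__":
    # A movement covering the full pitch range in half the average time -> 2.0
    print(pitch_slope(120.0, 220.0, 0.00, 0.10,
                      pitch_range=100.0, avg_move_time=0.20))
```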

ICA와 DNN을 이용한 방송 드라마 콘텐츠에서 음악구간 검출 성능 (Performance of music section detection in broadcast drama contents using independent component analysis and deep neural networks)

  • 허운행; 장병용; 조현호; 김정현; 권오욱
    • 말소리와 음성과학 / Vol. 10, No. 3 / pp. 19-29 / 2018
  • We propose to use independent component analysis (ICA) and deep neural network (DNN) to detect music sections in broadcast drama contents. Drama contents mainly comprise silence, noise, speech, music, and mixed (speech+music) sections. The silence section is detected by signal activity detection. To detect the music section, we train noise, speech, music, and mixed models with DNN. In computer experiments, we used the MUSAN corpus for training the acoustic model, and conducted an experiment using 3 hours' worth of Korean drama contents. As the mixed section includes music signals, it was regarded as a music section. The segmentation error rate (SER) of music section detection was observed to be 19.0%. In addition, when stereo mixed signals were separated into music signals using ICA, the SER was reduced to 11.8%.
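
The hypothetical snippet below illustrates the stereo separation step with scikit-learn's FastICA; the DNN classifier trained on MUSAN is only stubbed with a crude energy rule, since the paper's model is not reproduced here.

```python
# Hypothetical sketch: separate a stereo mixture with FastICA before
# frame-wise music/speech classification. The classifier is a stub.
import numpy as np
from sklearn.decomposition import FastICA

def separate_stereo(stereo):
    """stereo: array of shape (n_samples, 2). Returns two estimated sources."""
    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(stereo)       # shape (n_samples, 2)
    return sources[:, 0], sources[:, 1]

def classify_frames(signal, frame_len=16000):
    """Stand-in for the DNN: label each 1-second frame by a crude energy rule."""
    labels = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        labels.append("music" if np.std(frame) > 0.01 else "silence")
    return labels

if __name__ == "__main__":
    t = np.linspace(0, 3, 3 * 16000)
    music = 0.3 * np.sin(2 * np.pi * 440 * t)
    speech = 0.1 * np.random.randn(t.size)
    stereo = np.c_[0.7 * music + 0.3 * speech, 0.4 * music + 0.6 * speech]
    src_a, _ = separate_stereo(stereo)
    print(classify_frames(src_a)[:3])
```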

N-gram 기반의 유사도를 이용한 대화체 연속 음성 언어 모델링 (Spontaneous Speech Language Modeling using N-gram based Similarity)

  • 박영희; 정민화
    • 대한음성학회지:말소리 / No. 46 / pp. 117-126 / 2003
  • This paper presents our language model adaptation for Korean spontaneous speech recognition. Compared with written text corpora, Korean spontaneous speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction. Our approach focuses on improving the estimation of domain-dependent n-gram models by relevance-weighting out-of-domain text data, where style is represented by an n-gram based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of the neighboring words. The best result reduces the word error rate by 9.7% relative, showing that n-gram based relevance weighting captures the style difference well and that disfluencies are also good predictors.

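A toy illustration of relevance-weighting out-of-domain text by its n-gram tf-idf similarity to an in-domain corpus, roughly in the spirit of the approach above; the character n-grams, the example sentences, and the use of the maximum similarity as the weight are all assumptions.

```python
# Hypothetical sketch: weight out-of-domain sentences by their n-gram
# tf-idf cosine similarity to an in-domain corpus. Settings are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

in_domain = ["음 그래서 어 내일 만나자", "어 그게 그러니까 좀 애매해"]
out_of_domain = ["내일 회의 일정은 다음과 같습니다", "어 그래 내일 보자"]

# Character n-grams stand in for the paper's word n-grams here.
vec = TfidfVectorizer(analyzer="char", ngram_range=(1, 3))
in_mat = vec.fit_transform(in_domain)
out_mat = vec.transform(out_of_domain)

# Relevance weight of each out-of-domain sentence = maximum similarity
# to any in-domain sentence; these weights would then scale n-gram counts.
weights = cosine_similarity(out_mat, in_mat).max(axis=1)
for sent, w in zip(out_of_domain, weights):
    print(f"{w:.2f}  {sent}")
```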

음소변동규칙의 발견빈도에 기반한 음성인식 발음사전 구성 (Generating Pronunciation Lexicon for Continuous Speech Recognition Based on Observation Frequencies of Phonetic Rules)

  • 나민수; 정민화
    • 대한음성학회지:말소리 / No. 64 / pp. 137-153 / 2007
  • The pronunciation lexicon of a continuous speech recognition system should contain enough pronunciation variations to be used for building a search space large enough to contain a correct path, whereas the size of the pronunciation lexicon needs to be constrained for effective decoding and lower perplexities. This paper describes a procedure for selecting pronunciation variations to be included in the lexicon based on the frequencies of the corresponding phonetic rules observed in the training corpus. Likelihood of a phonetic rule's application is estimated using the observation frequency of the rule and is used to control the construction of a pronunciation lexicon. Experiments with various pronunciation lexica show that the proposed method is helpful to improve the speech recognition performance.

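A minimal sketch of the selection idea: keep a pronunciation variant only if the phonetic rule that generated it was observed frequently enough in the training corpus. The rules, frequencies, and threshold below are invented for illustration and are not the paper's values.

```python
# Hypothetical sketch: keep pronunciation variants only if the phonetic rule
# that produced them was observed often enough in the training corpus.
rule_frequency = {          # rule id -> relative observation frequency (invented)
    "nasal_assimilation": 0.92,
    "palatalization":     0.40,
    "h_deletion":         0.07,
}

variants = [                # (word, variant pronunciation, generating rule)
    ("국물", "궁물", "nasal_assimilation"),
    ("굳이", "구지", "palatalization"),
    ("좋은", "조은", "h_deletion"),
]

THRESHOLD = 0.10            # assumed cutoff on rule observation frequency

lexicon = {}
for word, pron, rule in variants:
    if rule_frequency[rule] >= THRESHOLD:
        lexicon.setdefault(word, []).append(pron)

print(lexicon)   # the h_deletion variant is pruned under this threshold
```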

Spectral Subtraction Using Spectral Harmonics for Robust Speech Recognition in Car Environments

  • Beh, Jounghoon; Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / Vol. 22, No. 2E / pp. 62-68 / 2003
  • This paper addresses a novel noise-compensation scheme to solve the mismatch problem between training and testing conditions for automatic speech recognition (ASR) systems, specifically in the car environment. Conventional spectral subtraction schemes rely on the signal-to-noise ratio (SNR): attenuation is imposed on the part of the spectrum that appears to have low SNR, and accentuation is applied to the part with high SNR. However, these schemes are based on the postulation that the power spectrum of noise is generally lower in magnitude than that of speech. While such a postulation is adequate for high-SNR environments, it is grossly inadequate for low-SNR scenarios such as the car environment. This paper proposes an efficient spectral subtraction scheme aimed specifically at low-SNR noisy environments, which distinctively extracts harmonics in the speech spectrum. Representative experiments confirm the superior performance of the proposed method over conventional methods. The experiments are conducted using car-noise-corrupted utterances from the Aurora2 corpus.
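
A bare-bones sketch in the spirit of the abstract above: standard over-subtraction of an estimated noise spectrum, with bins that look like harmonic peaks attenuated less. The peak picking and weighting here are simple stand-ins, not the paper's scheme.

```python
# Hypothetical spectral-subtraction sketch with extra protection for
# harmonic peaks. The harmonic weighting is a stand-in, not the paper's method.
import numpy as np

def spectral_subtract(frames_mag, noise_mag, alpha=2.0, floor=0.05):
    """frames_mag: (n_frames, n_bins) magnitude spectra; noise_mag: (n_bins,)."""
    cleaned = np.empty_like(frames_mag)
    for i, mag in enumerate(frames_mag):
        # Mark local spectral peaks as (very roughly) harmonic bins.
        peaks = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])
        harmonic = np.zeros_like(mag, dtype=bool)
        harmonic[1:-1] = peaks
        # Subtract less noise where the bin looks harmonic; apply a floor.
        over = np.where(harmonic, 0.5 * alpha, alpha)
        cleaned[i] = np.maximum(mag - over * noise_mag, floor * mag)
    return cleaned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.abs(rng.normal(size=(4, 128))) + 1.0
    noise = np.full(128, 0.8)
    print(spectral_subtract(noisy, noise).shape)   # (4, 128)
```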

Vowel Fundamental Frequency in Manner Differentiation of Korean Stops and Affricates

  • Jang, Tae-Yeoub
    • 음성과학 / Vol. 7, No. 1 / pp. 217-232 / 2000
  • In this study, I investigate the role of post-consonantal fundamental frequency (F0) as a cue for automatic distinction of types of Korean stops and affricates. Rather than examining data obtained by restricting contexts to a minimum to prevent the interference of irrelevant factors, a relatively natural speaker independent speech corpus is analysed. Automatic and statistical approaches are adopted to annotate data, to minimise speaker variability, and to evaluate the results. In spite of possible loss of information during those automatic analyses, statistics obtained suggest that vowel F0 is a useful cue for distinguishing manners of articulation of Korean non-continuant obstruents having the same place of articulation, especially of lax and aspirated stops and affricates. On the basis of the statistics, automatic classification is attempted over the relevant consonants in a specific context where the micro-prosodic effects appear to be maximised. The results confirm the usefulness of this effect in application for Korean phone recognition.

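A toy version of the classification step: post-consonantal F0 is z-scored within each speaker and a linear discriminant separates lax from aspirated consonants. The F0 values and the use of scikit-learn's LDA are assumptions for illustration.

```python
# Hypothetical sketch: speaker-normalized post-consonantal F0 as the single
# feature for lax vs. aspirated classification. Data values are invented.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# (speaker_id, post-consonantal F0 in Hz, label)
data = [
    ("s1", 115, "lax"), ("s1", 118, "lax"), ("s1", 150, "aspirated"),
    ("s1", 155, "aspirated"), ("s2", 190, "lax"), ("s2", 195, "lax"),
    ("s2", 240, "aspirated"), ("s2", 235, "aspirated"),
]

def zscore_by_speaker(rows):
    """Z-score F0 within each speaker to reduce speaker variability."""
    out = []
    for spk in {r[0] for r in rows}:
        f0s = np.array([r[1] for r in rows if r[0] == spk], dtype=float)
        mu, sd = f0s.mean(), f0s.std()
        out += [((r[1] - mu) / sd, r[2]) for r in rows if r[0] == spk]
    return out

normed = zscore_by_speaker(data)
X = np.array([[z] for z, _ in normed])
y = [label for _, label in normed]

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict([[1.0], [-1.0]]))   # high F0 -> aspirated, low F0 -> lax
```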

한국어 억양구의 경계톤 (The Boundary Tones in Korean Intonational Phrases)

  • 한선희; 오미라
    • 음성과학 / Vol. 5, No. 2 / pp. 109-129 / 1999
  • A study of boundary tones, which are realized at the final syllable of an Intonational Phrase, is important in that sentential meaning in Korean is often differentiated solely by the use of different boundary tones. The purposes of this paper are three-fold: first, it aims at finding out how the characteristics of boundary tones differ between a designed corpus and natural speech. Second, it shows that gender and dialectal differences are crucial factors in determining the different realizations of boundary tones. Finally, this study provides a basis for better speech synthesis and speech recognition through the analysis of the morphemes where boundary tones are realized. The study shows that nine different kinds of boundary tones are realized depending on contextual, gender, and dialectal differences. In addition to the boundary tones suggested in Jun (1993), three more boundary tones are introduced: L-%, H-%, and LHLH%.


Acoustic correlates of prosodic prominence in conversational speech of American English, as perceived by ordinary listeners

  • Mo, Yoon-Sook
    • 말소리와 음성과학 / Vol. 3, No. 3 / pp. 19-26 / 2011
  • Previous laboratory studies have shown that prosodic structures are encoded in the modulations of phonetic patterns of speech, including suprasegmental as well as segmental features. Drawing on prosodically annotated large-scale speech data from the Buckeye corpus of conversational American English, the current study first evaluated the reliability of prosody annotation by a large number of ordinary listeners and then examined whether and how prosodic prominence influences the phonetic realization of multiple acoustic parameters in everyday conversational speech. The results showed that all of the acoustic measures, including pitch, loudness, duration, and spectral balance, increase when a word is heard as prominent. These findings suggest that prosodic prominence enhances the phonetic characteristics of the acoustic parameters. The results also showed that the degree of phonetic enhancement varies with the type of acoustic parameter. With respect to formant structure, the findings more consistently support the Sonority Expansion Hypothesis than the Hyperarticulation Hypothesis, showing that lexically stressed vowels are hyperarticulated only when hyperarticulation does not interfere with sonority expansion. Taking all of this into account, the present study showed that prosodic prominence modulates the phonetic realization of the acoustic parameters in the direction of phonetic strengthening in everyday conversational speech, and that ordinary listeners are attentive to such phonetic variation associated with prosody in speech perception. However, the study also showed that in everyday conversational speech there is no single dominant acoustic measure signaling prosodic prominence; listeners must attend to small acoustic variations or integrate acoustic information from multiple acoustic parameters in prosody perception.

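The sketch below computes word-level versions of the four measures discussed above (pitch, loudness, duration, spectral balance) with librosa and numpy; the feature definitions, in particular the 1 kHz split used for spectral balance, are simplified assumptions rather than the study's exact measures.

```python
# Hypothetical sketch: word-level pitch, loudness, duration, and spectral
# balance. The 1 kHz split for spectral balance is an assumed simplification.
import numpy as np
import librosa

def word_measures(y, sr, start_s, end_s):
    seg = y[int(start_s * sr):int(end_s * sr)]
    f0 = librosa.yin(seg, fmin=75, fmax=400, sr=sr)         # pitch track (Hz)
    rms = librosa.feature.rms(y=seg)[0]                      # loudness proxy
    spec = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / sr)
    hi = spec[freqs >= 1000].sum()                           # energy above 1 kHz
    lo = spec[freqs < 1000].sum()                            # energy below 1 kHz
    return {
        "duration_s": end_s - start_s,
        "mean_f0_hz": float(np.nanmean(f0)),
        "mean_rms": float(rms.mean()),
        "spectral_balance_db": float(20 * np.log10(hi / (lo + 1e-12) + 1e-12)),
    }

if __name__ == "__main__":
    sr = 16000
    y = 0.1 * np.sin(2 * np.pi * 150 * np.arange(sr) / sr)   # 1 s synthetic tone
    print(word_measures(y, sr, 0.1, 0.6))
```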