• Title/Summary/Keyword: phonetic data

Search Results: 200

Efficient Triphone Clustering Using Monophone Distance (모노폰 거리를 이용한 트라이폰 클러스터링 방법 연구)

  • Bang Kyu-Seop;Yook Dong-Suk
    • Proceedings of the KSPS conference
    • /
    • 2006.05a
    • /
    • pp.41-44
    • /
    • 2006
  • The purpose of state tying is to reduce the number of models and to obtain relatively reliable output probability distributions. There are two approaches: top-down clustering and bottom-up clustering. For seen data, the bottom-up approach performs better than the top-down approach. In this paper, we propose a new clustering technique that can enhance clustering performance for undertrained triphones. The basic idea is to tie unreliable triphones before clustering. An unreliable triphone is one that appears in the training data too infrequently for its model to be trained accurately. We propose using a monophone distance to preprocess these unreliable triphones. A pilot experiment shows that the proposed method reduces the error rate significantly.

  • PDF
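The preprocessing idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: models are reduced to single diagonal Gaussians, the distance between two triphones with the same centre phone is approximated by the summed symmetric KL divergence of their left- and right-context monophone models, and any triphone seen fewer than `min_count` times is tied to its nearest reliable triphone. All names (`sym_kl_diag`, `tie_rare_triphones`, `min_count`) are hypothetical.

```python
import numpy as np

def sym_kl_diag(g1, g2):
    """Symmetric KL divergence between two diagonal Gaussians (mean, var)."""
    (m1, v1), (m2, v2) = g1, g2
    kl = lambda ma, va, mb, vb: 0.5 * np.sum(
        np.log(vb / va) + (va + (ma - mb) ** 2) / vb - 1.0)
    return kl(m1, v1, m2, v2) + kl(m2, v2, m1, v1)

def triphone_distance(tri_a, tri_b, mono):
    """Distance between two triphones (l, c, r) sharing a centre phone,
    measured on the monophone models of their left/right contexts."""
    (la, ca, ra), (lb, cb, rb) = tri_a, tri_b
    assert ca == cb, "only compare triphones with the same centre phone"
    return sym_kl_diag(mono[la], mono[lb]) + sym_kl_diag(mono[ra], mono[rb])

def tie_rare_triphones(counts, mono, min_count):
    """Tie each under-trained triphone to its nearest reliable triphone
    (same centre phone) before the main clustering step."""
    reliable = [t for t, n in counts.items() if n >= min_count]
    tying = {}
    for tri, n in counts.items():
        if n >= min_count:
            continue
        candidates = [r for r in reliable if r[1] == tri[1]]
        if candidates:
            tying[tri] = min(candidates,
                             key=lambda r: triphone_distance(tri, r, mono))
    return tying
```

With monophone models for /i/ and /a/ that lie close together, a rare triphone i-s+a would be tied to a well-trained a-s+a rather than to a u-s+u whose contexts are acoustically distant.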

On Reaction Signals

  • Hatanaka, Takami
    • Proceedings of the KSPS conference
    • /
    • 2000.07a
    • /
    • pp.301-311
    • /
    • 2000
  • The purpose of this paper is to explore the use of reaction signals by Japanese and English speakers. After collecting data from Japanese and English (American and British) speakers, I examined the data and decided to focus on five signals: ah, eh, oh, m, and ə:m. At first I thought that the first three resembled one another in form, tone, and meaning, while the other two occur frequently only in English. But on reading the data in more detail, I found the reason Japanese speakers use the signal eh so frequently. I also found that the signal eh is a kind of substitute for a real word; a similar linguistic phenomenon is seen in the use of m, and m appears to differ from ə:m in function, depending on whether the speaker is talkative or not. American students learning Japanese began their Japanese with an English reaction signal, and the reverse phenomenon was found with Japanese students speaking English, which suggests that reaction signals are used spontaneously, even though they have various tones and meanings.

  • PDF

Classification of Pornographic Videos Based on the Audio Information (오디오 신호에 기반한 음란 동영상 판별)

  • Kim, Bong-Wan;Choi, Dae-Lim;Lee, Yong-Ju
    • MALSORI
    • /
    • no.63
    • /
    • pp.139-151
    • /
    • 2007
  • As the Internet becomes prevalent in our lives, harmful content such as pornographic videos has been increasing on the Internet, which has become a serious problem. To prevent this, many filtering systems exist, mainly based on keyword- or image-based methods. The main purpose of this paper is to devise a system that classifies pornographic videos based on audio information. We use the mel-cepstrum modulation energy (MCME), a modulation energy calculated on the time trajectory of the mel-frequency cepstral coefficients (MFCC), together with the MFCC itself, as the feature vector. For the classifier, we use the well-known Gaussian mixture model (GMM). Experimental results showed that the proposed system correctly classified 98.3% of pornographic data and 99.8% of non-pornographic data. We expect the proposed method can be applied to a more accurate classification system that uses both video and audio information.

  • PDF
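The GMM decision rule the abstract describes, train one model per class and label a clip by which model gives the higher likelihood, can be sketched with scikit-learn. This is not the authors' system: the MFCC/MCME features are replaced by synthetic 2-D vectors (real features would be tens of dimensions), and all names are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-ins for per-frame MFCC/MCME feature vectors of the two classes.
porn_train = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(500, 2))
clean_train = rng.normal(loc=[-3.0, -3.0], scale=1.0, size=(500, 2))

# One GMM per class, trained on that class's frames.
gmm_porn = GaussianMixture(n_components=4, random_state=0).fit(porn_train)
gmm_clean = GaussianMixture(n_components=4, random_state=0).fit(clean_train)

def classify(frames):
    """Label a clip by comparing average per-frame log-likelihoods."""
    return "porn" if gmm_porn.score(frames) > gmm_clean.score(frames) else "clean"
```

`GaussianMixture.score` returns the average log-likelihood per sample, so the comparison is independent of clip length.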

Fast Speaker Adaptation and Environment Compensation Based on Eigenspace-based MLLR (Eigenspace-based MLLR에 기반한 고속 화자적응 및 환경보상)

  • Song Hwa-Jeon;Kim Hyung-Soon
    • MALSORI
    • /
    • no.58
    • /
    • pp.35-44
    • /
    • 2006
  • Maximum likelihood linear regression (MLLR) adaptation suffers severe performance degradation when only a very small amount of adaptation data is available. Eigenspace-based MLLR, an alternative to MLLR for fast speaker adaptation, has the weakness that it cannot deal with the mismatch between training and testing environments. In this paper, we propose simultaneous fast speaker and environment adaptation based on eigenspace-based MLLR. We also extend the sub-stream eigenspace-based MLLR to generalize eigenspace-based MLLR with bias compensation. A vocabulary-independent word recognition experiment shows that the proposed algorithm is superior to eigenspace-based MLLR regardless of the amount of adaptation data in diverse noisy environments. In particular, the proposed sub-stream eigenspace-based MLLR with bias compensation yields a 67% relative improvement with 10 adaptation words in a 10 dB SNR environment, compared with conventional eigenspace-based MLLR.

  • PDF
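The reason eigenspace methods stay robust with tiny amounts of adaptation data is that the speaker-dependent parameter supervector is constrained to the span of a few eigenvectors, so only a handful of weights must be estimated. Below is a minimal least-squares sketch of that constraint; the paper's actual estimation is maximum-likelihood over HMM statistics, and every name here (`adapt_in_eigenspace`, the toy dimensions) is an assumption for illustration.

```python
import numpy as np

def adapt_in_eigenspace(mean_sv, eigenvectors, observed, observed_idx):
    """Estimate eigen-weights from a few observed supervector entries
    and return the adapted full supervector.

    mean_sv      : (D,)   speaker-independent supervector
    eigenvectors : (K, D) basis spanning speaker variability
    observed     : (M,)   adaptation estimates of some entries
    observed_idx : (M,)   indices of those entries (M can be << D)
    """
    A = eigenvectors[:, observed_idx].T        # (M, K) design matrix
    b = observed - mean_sv[observed_idx]       # (M,)  offsets from the mean
    w, *_ = np.linalg.lstsq(A, b, rcond=None)  # K eigen-weights
    return mean_sv + eigenvectors.T @ w        # adapted supervector (D,)
```

Because only K weights are estimated, observing M ≥ K entries is enough to adapt all D parameters, which is the point of the eigenspace constraint.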

Prediction of Break Indices in Korean Read Speech (국어 낭독체 발화의 운율경계 예측)

  • Kim Hyo Sook;Kim Chung Won;Kim Sun Ju;Kim Seoncheol;Kim Sam Jin;Kwon Chul Hong
    • MALSORI
    • /
    • no.43
    • /
    • pp.1-9
    • /
    • 2002
  • This study aims to model Korean prosodic phrasing using the CART (classification and regression tree) method. Our data are limited to Korean read speech. We used 400 sentences drawn from editorials, essays, novels, and news scripts. A professional radio actress read the 400 sentences, yielding about two hours of speech. We used the K-ToBI transcription system. For technical reasons, the original break indices 1 and 2 were merged into AP; unlike the original K-ToBI, we therefore use three break indices: Zero, AP, and IP. The linguistic information selected for this study is as follows: the number of syllables in an 'Eojeol', the location of the 'Eojeol' in the sentence, and the part of speech (POS) of adjacent 'Eojeol's. We trained a CART tree using this information as variables. The average accuracy of predicting Non-IP (Zero and AP) versus IP was 90.4% on the training data and 88.5% on the test data. The average prediction accuracy of Zero versus AP was 79.7% on the training data and 78.7% on the test data.

  • PDF
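The setup above, predict a break index at each Eojeol boundary from simple positional and POS features, maps directly onto an off-the-shelf decision tree. The sketch below uses a toy hand-made dataset, not the paper's corpus; the feature encoding and POS codes are assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy training set: one row per Eojeol boundary.
# Columns: syllables in the Eojeol, relative position in the sentence (0-1),
# POS code of the following Eojeol (0=noun, 1=verb, 2=particle).
X = np.array([
    [2, 0.10, 0], [3, 0.30, 2], [4, 0.50, 1],
    [2, 0.90, 1], [5, 1.00, 1], [3, 0.95, 0],
    [2, 0.20, 2], [4, 0.60, 0],
])
y = np.array(["Zero", "AP", "AP", "IP", "IP", "IP", "Zero", "AP"])

# CART-style tree: splits are chosen greedily by impurity reduction.
cart = DecisionTreeClassifier(random_state=0).fit(X, y)
```

On real data one would of course hold out a test set, as the paper does, rather than report training accuracy.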

Irregular Pronunciation Detection for Korean Point-of-Interest Data Using Prosodic Word

  • Kim Sun-Hee;Jeon Je-Hun;Na Min-Soo;Chung Min-Hwa
    • MALSORI
    • /
    • no.57
    • /
    • pp.123-137
    • /
    • 2006
  • This paper proposes a method of detecting irregular pronunciations in Korean POI data, adopting the notion of the Prosodic Word from Prosodic Phonology (Selkirk 1984; Nespor and Vogel 1986) and Intonational Phonology (Jun 1996). To show the performance of the proposed method, a detection experiment was conducted on 250,000 POI entries. When all the data were used for training, 99.99% of the exceptional prosodic words were detected, which shows the stability of the system. The results show that a similar ratio of exceptional prosodic words (22.4% on average) was detected at each stage where a certain amount of training data was added. Intended as an example of an interdisciplinary study of linguistics and computer science, this study will, on the one hand, provide an understanding of the Korean language from a phonological point of view and, on the other hand, enable the systematic development of a multiple-pronunciation lexicon for high-performance Korean TTS or ASR systems.

  • PDF

Improved Decision Tree-Based State Tying In Continuous Speech Recognition System (연속 음성 인식 시스템을 위한 향상된 결정 트리 기반 상태 공유)

  • ;Xintian Wu;Chaojun Liu
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.6
    • /
    • pp.49-56
    • /
    • 1999
  • In many continuous speech recognition systems based on HMMs, decision tree-based state tying has been used not only to improve the robustness and accuracy of context-dependent acoustic modeling but also to synthesize unseen models. To construct the phonetic decision tree, the standard method performs one-level pruning using only single-Gaussian triphone models. In this paper, two novel approaches, a two-level decision tree and a multi-mixture decision tree, are proposed to obtain better performance through more accurate acoustic modeling. The two-level decision tree performs two levels of pruning, for state tying and for mixture weight tying; using the second level, tied states can have different mixture weights based on the similarities of their phonetic contexts. In the second approach, the phonetic decision tree continues to be updated through training, mixture splitting, and re-estimation, and multi-mixture Gaussian models as well as single-Gaussian models are used to construct the multi-mixture decision tree. Continuous speech recognition experiments using these approaches on the BN-96 and WSJ5k data showed a reduction in word error rate compared to the standard decision-tree-based system given a similar number of tied states.

  • PDF
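The "standard method" the abstract refers to selects, at each tree node, the phonetic question whose yes/no split of the tied states maximizes the single-Gaussian log-likelihood gain. A minimal sketch of that criterion, using raw frames and one Gaussian per side rather than sufficient statistics (the question names and data are invented for illustration):

```python
import numpy as np

def pooled_loglik(frames):
    """Log-likelihood of frames under a single diagonal Gaussian
    fitted to those same frames (one dimension per column)."""
    n = len(frames)
    var = frames.var(axis=0) + 1e-8
    return -0.5 * n * np.sum(np.log(2 * np.pi * var) + 1.0)

def best_question(states, questions):
    """Pick the phonetic question whose yes/no split of the states
    gives the largest likelihood gain over keeping them pooled."""
    parent = pooled_loglik(np.concatenate(list(states.values())))
    best, best_gain = None, 0.0
    for name, predicate in questions.items():
        yes = [f for s, f in states.items() if predicate(s)]
        no = [f for s, f in states.items() if not predicate(s)]
        if not yes or not no:
            continue
        gain = (pooled_loglik(np.concatenate(yes))
                + pooled_loglik(np.concatenate(no)) - parent)
        if gain > best_gain:
            best, best_gain = name, gain
    return best, best_gain
```

Splitting recurses on each side with the remaining questions until the gain falls below a threshold; the paper's two-level variant then prunes again at the mixture-weight level.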

A Comparative Study of the Speech Signal Parameters for the Consonants of Pyongyang and Seoul Dialects - Focused on "ㅅ/ㅆ" (평양 지역어와 서울 지역어의 자음에 대한 음성신호 파라미터들의 비교 연구 - "ㅅ/ ㅆ"을 중심으로)

  • So, Shin-Ae;Lee, Kang-Hee;You, Kwang-Bock;Lim, Ha-Young
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.8 no.6
    • /
    • pp.927-937
    • /
    • 2018
  • In this paper, a comparative study of the consonants of the Pyongyang and Seoul dialects of Korean is performed from the perspective of signal processing, which can be regarded as the basis of engineering applications. To date, most speech signal studies have focused primarily on vowels, which play an important role in language evolution. In any language, however, the number of consonants is greater than the number of vowels, so research on consonants is also important. In this paper, following the vowel study of the Pyongyang dialect conducted with phonological and experimental phonetic methods, the consonants are studied on an engineering basis. The alveolar consonant, which has shown many differences in phonetic value between the Pyongyang and Seoul dialects, was used as the experimental data. The major parameters of speech signal analysis (formant frequency, pitch, spectrogram) were measured, and the phonetic values of the two dialects were compared with respect to /시/ and /씨/ of Korean. This study can serve as a basis for future voice recognition and voice synthesis work.
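Of the parameters listed in the abstract, pitch is the simplest to compute. A common textbook method (not necessarily the one the authors used) estimates F0 from the lag of the autocorrelation peak; the sketch below runs it on a synthetic 120 Hz vowel-like signal, all parameters being illustrative.

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=60.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) by autocorrelation peak picking
    within the plausible lag range [sr/fmax, sr/fmin]."""
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # non-negative lags
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# Synthetic voiced frame: 120 Hz fundamental plus its second harmonic.
sr = 16000
t = np.arange(int(0.05 * sr)) / sr
voiced = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
```

Restricting the search to the [fmin, fmax] lag window is what keeps the estimator from locking onto the zero-lag peak or onto harmonics.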

Use of Acoustic Analysis for Individualised Therapeutic Planning and Assessment of Treatment Effect in Dysarthric Children (조음장애 환아에서 개별화된 치료계획 수립과 효과 판정을 위한 음향음성학적 분석방법의 활용)

  • Kim, Yun-Hee;Yu, Hee;Shin, Seung-Hun;Kim, Hyun-Gi
    • Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.19-35
    • /
    • 2000
  • Speech evaluation and treatment planning for patients with articulation disorders have traditionally been based on perceptual judgement by speech pathologists. Recently, various computerized speech analysis systems have been developed and are commonly used in clinical settings to obtain objective and quantitative data and specific treatment strategies. Ten dysarthric children (6 with neurogenic and 4 with functional dysarthria) participated in this experiment. Speech evaluation of dysarthria was performed in two ways: first, acoustic analysis with Visi-Pitch and a Computerized Speech Lab; second, perceptual scoring of phonetic error rates in a 100-word test. The results of the initial evaluation served as the primary guidelines for individualized treatment planning for each patient's speech problems. After a mean treatment period of 5 months, the follow-up data of both dysarthric groups showed increased maximum phonation time, an increased alternating motion rate, and a decreased occurrence of articulatory deviation. The changes in acoustic data and therapeutic effects were more prominent in children with dysarthria due to neurologic causes than in those with functional dysarthria. Three cases, including their pre- and post-treatment data, are illustrated in detail.

  • PDF

Phonetic Functionalism in Coronal/Non-coronal Asymmetry

  • Kim, Sung-A.
    • Speech Sciences
    • /
    • v.10 no.1
    • /
    • pp.41-58
    • /
    • 2003
  • Coronal/non-coronal asymmetry refers to the typological trend whereby coronals, rather than non-coronals, are the more likely targets in place assimilation. Although the phenomenon has been accounted for by resorting to the notion of unmarkedness in formalistic approaches to sound patterns, the examination of rules and representations cannot answer why there should be such a process in the first place. Furthermore, the motivation of coronal/non-coronal asymmetry has remained controversial to date, even in the field of phonetics. The present study investigated listeners' perception of coronal and non-coronal stops in the context VC1C2V, after critically reviewing the three types of phonetic accounts of coronal/non-coronal asymmetry: articulatory, perceptual, and gestural-overlap accounts. An experiment was conducted to test whether the phenomenon in question may arise from listeners' inability to identify the weaker place cues in VC transitions, as argued by Ohala (1990), i.e., that coronals have weak place cues that cause listeners' misperception. Spliced nonsense VC1C2V utterances were presented to 20 native speakers of English and Korean. Data analysis showed that the majority of the subjects reported C2 as C1. More importantly, the place of articulation of C1 did not affect the listeners' identification: compared to non-coronals, coronals did not show a significantly lower rate of correct identification. This study challenges the view that coronal/non-coronal asymmetry is attributable to the weak place cues of coronals, providing evidence that CV cues are perceptually more salient than VC cues. While a perceptual saliency account may explain the frequent occurrence of regressive assimilation across languages, it cannot be extended to coronal/non-coronal asymmetry.

  • PDF