• Title/Abstract/Keywords: phonetic data


모노폰 거리를 이용한 트라이폰 클러스터링 방법 연구 (Efficient Triphone Clustering Using Monophone Distance)

  • 방규섭;육동석
    • 대한음성학회:학술대회논문집 / 대한음성학회 2006년도 춘계 학술대회 발표논문집 / pp.41-44 / 2006
  • The purpose of state tying is to reduce the number of models and to use relatively reliable output probability distributions. There are two approaches: top-down clustering and bottom-up clustering. For seen data, the bottom-up approach performs better than the top-down approach. In this paper, we propose a new clustering technique that improves clustering performance for undertrained triphones. The basic idea is to tie unreliable triphones before clustering; an unreliable triphone is one that appears in the training data too infrequently for its model to be trained accurately. We propose using a monophone distance to preprocess these unreliable triphones. A pilot experiment has shown that the proposed method reduces the error rate significantly.

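The preprocessing idea in the abstract above (tying each rare triphone to a reliable triphone with the same centre phone, chosen by monophone distance over the left and right contexts) can be sketched as follows. The distance table, counts, and threshold here are hypothetical stand-ins for illustration, not the paper's actual values:

```python
from collections import Counter

# Hypothetical monophone distances (symmetric); in the paper these would be
# derived from monophone model statistics rather than hand-set.
MONO_DIST = {
    ("a", "a"): 0.0, ("a", "e"): 1.0, ("a", "o"): 2.0,
    ("e", "e"): 0.0, ("e", "o"): 1.5, ("o", "o"): 0.0,
}

def mono_dist(p, q):
    # Look up either orientation; unknown pairs get a large default distance.
    return MONO_DIST.get((p, q), MONO_DIST.get((q, p), 3.0))

def tie_unreliable(triphone_counts, min_count=3):
    """Map each rare triphone (l, c, r) to the closest reliable triphone with
    the same centre phone, where closeness is the sum of the left-context and
    right-context monophone distances."""
    reliable = {t for t, n in triphone_counts.items() if n >= min_count}
    tying = {}
    for tri, n in triphone_counts.items():
        if n >= min_count:
            continue
        l, c, r = tri
        candidates = [t for t in reliable if t[1] == c]
        if candidates:
            tying[tri] = min(
                candidates,
                key=lambda t: mono_dist(l, t[0]) + mono_dist(r, t[2]),
            )
    return tying

counts = Counter({("a", "k", "a"): 50, ("e", "k", "o"): 40, ("o", "k", "a"): 1})
print(tie_unreliable(counts))  # the rare o-k+a is tied to a-k+a
```

The tied triphones would then enter the usual bottom-up clustering as a single unit, so no cluster statistic is dominated by a poorly trained model.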

On Reaction Signals

  • Hatanaka, Takami
    • 대한음성학회:학술대회논문집 / 대한음성학회 2000년도 7월 학술대회지 / pp.301-311 / 2000
  • The purpose of this paper is to explore the use of reaction signals by Japanese and English speakers. After collecting data from Japanese and English (American and British) speakers, I examined them and decided to focus on five signals: ah, eh, oh, m, and əːm. At first I assumed that the first three resembled one another in form, tone, and meaning, while the other two occur frequently only in English. On reading the data in more detail, however, I found the reason for the overly frequent use of the signal eh by Japanese speakers. It also emerged that eh serves as a kind of substitute for a real word (a similar linguistic phenomenon is seen in the use of m), and that m seems to differ from əːm in function, depending on whether the speaker is talkative or not. Furthermore, American students learning Japanese began their Japanese with an English reaction signal, and the reverse was found with Japanese students speaking English, which suggests that reaction signals are used spontaneously even though they carry various tones and meanings.


오디오 신호에 기반한 음란 동영상 판별 (Classification of Pornographic Videos Based on the Audio Information)

  • 김봉완;최대림;이용주
    • 대한음성학회지:말소리 / 제63호 / pp.139-151 / 2007
  • As the Internet becomes prevalent in our lives, harmful content such as pornographic videos has been increasing on the Internet, which has become a very serious problem. To prevent this, many filtering systems exist, mainly based on keyword- or image-based methods. The main purpose of this paper is to devise a system that classifies pornographic videos based on the audio information. As feature vectors we use the mel-cepstrum modulation energy (MCME), a modulation energy calculated on the time trajectory of the mel-frequency cepstral coefficients (MFCC), as well as the MFCC themselves. For the classifier, we use the well-known Gaussian mixture model (GMM). The experimental results show that the proposed system correctly classified 98.3% of the pornographic data and 99.8% of the non-pornographic data. We expect the proposed method can be applied to a more accurate classification system that uses both video and audio information.

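As a rough illustration of the GMM decision rule the abstract describes (not the paper's actual models, features, or training procedure), the following scores MFCC-like frames under two diagonal-covariance GMMs and picks the higher-likelihood class; all parameters are toy values:

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    """Total log-likelihood of frames x under a diagonal-covariance GMM.
    x: (T, D); weights: (M,); means, variances: (M, D)."""
    diff = x[:, None, :] - means[None, :, :]                      # (T, M, D)
    log_comp = (
        -0.5 * np.sum(diff**2 / variances + np.log(2 * np.pi * variances), axis=2)
        + np.log(weights)
    )                                                             # (T, M)
    # Log-sum-exp over mixture components, then sum over frames.
    m = log_comp.max(axis=1, keepdims=True)
    return (m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))).sum()

def classify(features, gmm_a, gmm_b):
    """Return 'A' if the frames score higher under gmm_a, else 'B'."""
    return "A" if gmm_loglik(features, *gmm_a) > gmm_loglik(features, *gmm_b) else "B"

rng = np.random.default_rng(0)
# Two toy 2-component GMMs standing in for the pornographic / normal models.
gmm_a = (np.array([0.5, 0.5]), np.array([[0.0, 0.0], [1.0, 1.0]]), np.ones((2, 2)))
gmm_b = (np.array([0.5, 0.5]), np.array([[5.0, 5.0], [6.0, 6.0]]), np.ones((2, 2)))
frames = rng.normal(0.5, 1.0, size=(100, 2))  # feature frames near gmm_a
print(classify(frames, gmm_a, gmm_b))
```

In the paper's setting the two models would be trained on MFCC+MCME features from labelled pornographic and non-pornographic audio, with far more components and dimensions.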

Eigenspace-based MLLR에 기반한 고속 화자적응 및 환경보상 (Fast Speaker Adaptation and Environment Compensation Based on Eigenspace-based MLLR)

  • 송화전;김형순
    • 대한음성학회지:말소리 / 제58호 / pp.35-44 / 2006
  • Maximum likelihood linear regression (MLLR) adaptation suffers severe performance degradation when only a very small amount of adaptation data is available. Eigenspace-based MLLR, an alternative to MLLR for fast speaker adaptation, has the weakness that it cannot deal with a mismatch between training and testing environments. In this paper, we propose simultaneous fast speaker and environment adaptation based on eigenspace-based MLLR. We also extend sub-stream eigenspace-based MLLR to generalize eigenspace-based MLLR with bias compensation. A vocabulary-independent word recognition experiment shows that the proposed algorithm is superior to eigenspace-based MLLR regardless of the amount of adaptation data in diverse noisy environments. In particular, the proposed sub-stream eigenspace-based MLLR with bias compensation yields a 67% relative improvement with 10 adaptation words in a 10 dB SNR environment, compared with conventional eigenspace-based MLLR.

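The core computation in eigenspace-based MLLR with bias compensation, i.e. expressing a speaker's adaptation as a weighted combination of eigen-directions plus an environment bias and estimating the few weights from sparse data, can be sketched in a deliberately simplified form (mean-shift vectors instead of full regression matrices; the basis, weights, and bias below are all synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 4, 2  # feature dimension, number of eigen-directions

# Hypothetical eigen-basis; in eigenspace-based MLLR this would come from
# PCA over the MLLR transforms of the training speakers.
E = rng.normal(size=(D, K))
true_w = np.array([0.8, -0.3])   # speaker weights to be recovered
true_bias = np.full(D, 0.5)      # stands in for a channel/noise offset

# Observed mean shifts from a tiny amount of (slightly noisy) adaptation data.
shifts = E @ true_w + true_bias + rng.normal(scale=0.001, size=D)

# Jointly solve for the K eigen-weights and a scalar bias:
#   [E | 1] @ [w; b] ≈ shifts   (least squares)
A = np.hstack([E, np.ones((D, 1))])
sol, *_ = np.linalg.lstsq(A, shifts, rcond=None)
w_hat, b_hat = sol[:K], sol[K]
print(w_hat, b_hat)
```

Because only K + 1 numbers are estimated instead of a full D x D transform, the estimate stays stable even with a handful of adaptation words, which is the point of the eigenspace approach.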

국어 낭독체 발화의 운율경계 예측 (Prediction of Break Indices in Korean Read Speech)

  • 김효숙;김정원;김선주;김선철;김삼진;권철홍
    • 대한음성학회지:말소리 / 제43호 / pp.1-9 / 2002
  • This study aims to model Korean prosodic phrasing using the CART (classification and regression tree) method. Our data are limited to Korean read speech: 400 sentences drawn from editorials, essays, novels, and news scripts, read by a professional radio actress over about two hours. We used the K-ToBI transcription system. For technical reasons, the original break indices 1 and 2 were merged into AP, so unlike the original K-ToBI we use three break indices: Zero, AP, and IP. The linguistic information selected for this study is as follows: the number of syllables in an 'Eojeol', the location of the 'Eojeol' in the sentence, and the part of speech (POS) of adjacent 'Eojeol's. We trained a CART tree using this information as variables. The average accuracy of predicting Non-IP (Zero and AP) versus IP was 90.4% on training data and 88.5% on test data. The average prediction accuracy for Zero and AP was 79.7% on training data and 78.7% on test data.

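A single node of the CART used above reduces to finding the feature/threshold split that minimises weighted Gini impurity. A self-contained toy version (the features and break-index labels below are invented for illustration, not the paper's data):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(samples, labels):
    """Return the (feature index, threshold) minimising the weighted Gini
    impurity of the two children, as one CART node would."""
    best = None
    for f in range(len(samples[0])):
        for thr in sorted({s[f] for s in samples}):
            left = [y for s, y in zip(samples, labels) if s[f] <= thr]
            right = [y for s, y in zip(samples, labels) if s[f] > thr]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if best is None or score < best[0]:
                best = (score, f, thr)
    return best[1], best[2]

# Toy features per Eojeol: (syllable count, position in sentence 0..1);
# labels are simplified break indices.
X = [(2, 0.1), (3, 0.4), (4, 0.5), (3, 0.9), (5, 0.95)]
y = ["Zero", "AP", "AP", "IP", "IP"]
print(best_split(X, y))  # splits late-sentence Eojeols off as IP
```

A full CART recurses this split on each child until a purity or size criterion is met; the paper additionally uses categorical POS features, which split on subsets rather than thresholds.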

Irregular Pronunciation Detection for Korean Point-of-Interest Data Using Prosodic Word

  • Kim Sun-Hee;Jeon Je-Hun;Na Min-Soo;Chung Min-Hwa
    • 대한음성학회지:말소리 / 제57호 / pp.123-137 / 2006
  • This paper proposes a method of detecting irregular pronunciations in Korean POI data, adopting the notion of the prosodic word from Prosodic Phonology (Selkirk 1984; Nespor and Vogel 1986) and Intonational Phonology (Jun 1996). To show the performance of the proposed method, a detection experiment was conducted on 250,000 POI entries. When all the data were trained, 99.99% of the exceptional prosodic words were detected, which shows the stability of the system. The results also show that a similar ratio of exceptional prosodic words (22.4% on average) was detected at each stage as additional training data were added. Intended as an example of an interdisciplinary study of linguistics and computer science, this study will, on the one hand, provide an understanding of the Korean language from the phonological point of view and, on the other, enable the systematic development of a multiple-pronunciation lexicon for high-performance Korean TTS or ASR systems.


연속 음성 인식 시스템을 위한 향상된 결정 트리 기반 상태 공유 (Improved Decision Tree-Based State Tying In Continuous Speech Recognition System)

  • 김동화;;;김형순;김영호
    • 한국음향학회지 / 제18권6호 / pp.49-56 / 1999
  • Decision tree-based state tying is widely used in many HMM-based continuous speech recognition systems, both for robust and accurate context-dependent acoustic modelling and for synthesising models unseen during training. The standard method for building a phonetic decision tree uses only one-level pruning with single-Gaussian triphone models. In this paper, we propose two new approaches to improve recognition performance through more refined acoustic modelling: a two-level decision tree and a multi-mixture decision tree. The two-level decision tree performs two-level pruning for state tying and mixture-weight tying; with the second level, tied states can use different mixture weights according to the similarity of their phonetic contexts. In the second proposed method, the phonetic decision tree is continuously updated during training, i.e., along with mixture splitting and re-estimation. To build the multi-mixture decision tree, multi-mixture Gaussian models are used in addition to single-Gaussian models. Continuous speech recognition experiments on the BN-96 and WSJ5k data showed that the proposed methods reduce the word error rate compared with a system using the standard decision tree, while keeping a similar number of tied states.


평양 지역어와 서울 지역어의 자음에 대한 음성신호 파라미터들의 비교 연구 - "ㅅ/ㅆ"을 중심으로 (A Comparative Study of the Speech Signal Parameters for the Consonants of Pyongyang and Seoul Dialects - Focused on "ㅅ/ㅆ")

  • 소신애;이강희;유광복;임하영
    • 예술인문사회 융합 멀티미디어 논문지 / 제8권6호 / pp.927-937 / 2018
  • From the perspective of signal processing, which underlies engineering applications, this paper presents a comparative study of the consonants of the Pyongyang and Seoul dialects of Korean. To date, most phonetic research has focused on vowels, which play an important role in language evolution. In almost every language, however, consonants outnumber vowels, so the phonetic study of consonants is also important for language research. Complementing earlier phonological and experimental-phonetic studies of the vowels of the Pyongyang dialect, this paper examines its consonants with engineering methods. The alveolar consonants, whose phonetic values differ considerably between the Pyongyang and Seoul dialects, were taken as data, and the principal parameters of the speech signal (formant frequencies, pitch, spectrograms, etc.) were measured. The phonetic values of Korean /시/ and /씨/ in the two dialects were compared. These results can serve as basic data for future speech recognition and speech synthesis.
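Of the signal parameters mentioned above, the spectrogram is the most direct to compute. A minimal short-time-FFT sketch, using a synthetic tone in place of real dialect recordings:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT, the kind of
    representation used to compare the frication noise of /ㅅ/ and /ㅆ/."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, frame_len//2 + 1)

fs = 8000
t = np.arange(fs) / fs                # one second of audio
sig = np.sin(2 * np.pi * 1000 * t)    # 1 kHz tone standing in for speech
S = spectrogram(sig)
peak_bin = S.mean(axis=0).argmax()
print(peak_bin * fs / 256)            # → 1000.0 (Hz), the tone's frequency
```

Formant and pitch estimation build on the same frame-wise analysis (e.g. LPC root-finding for formants, autocorrelation for pitch), which is why the three parameters are usually extracted together.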

조음장애 환아에서 개별화된 치료계획 수립과 효과 판정을 위한 음향음성학적 분석방법의 활용 (Use of Acoustic Analysis for Individualised Therapeutic Planning and Assessment of Treatment Effect in Dysarthric Children)

  • 김연희;유희;신승훈;김현기
    • 음성과학 / 제7권2호 / pp.19-35 / 2000
  • Speech evaluation and treatment planning for patients with articulation disorders have traditionally been based on perceptual judgement by speech pathologists. Recently, various computerized speech analysis systems have been developed and are commonly used in clinical settings to obtain objective, quantitative data and specific treatment strategies. Ten dysarthric children (6 with neurogenic and 4 with functional dysarthria) participated in this experiment. Speech evaluation of dysarthria was performed in two ways: first, acoustic analysis with Visi-Pitch and a Computerized Speech Lab; and second, perceptual scoring of phonetic error rates in a 100-word test. The results of the initial evaluation served as the primary guidelines for individualised treatment planning for each patient's speech problems. After a mean treatment period of 5 months, the follow-up data of both dysarthric groups showed increased maximum phonation time, increased alternating motion rate, and decreased occurrence of articulatory deviation. The changes in acoustic data and the therapeutic effects were more prominent in children with dysarthria of neurologic cause than in those with functional dysarthria. Three cases, including their pre- and post-treatment data, are illustrated in detail.


Phonetic Functionalism in Coronal/Non-coronal Asymmetry

  • Kim, Sung-A.
    • 음성과학 / 제10권1호 / pp.41-58 / 2003
  • Coronal/non-coronal asymmetry refers to the typological trend whereby coronals, rather than non-coronals, are the more likely targets of place assimilation. Although the phenomenon has been accounted for by resorting to the notion of unmarkedness in formalistic approaches to sound patterns, the examination of rules and representations cannot answer why there should be such a process in the first place. Furthermore, the motivation of coronal/non-coronal asymmetry has remained controversial to date even in the field of phonetics. The present study investigated listeners' perception of coronal and non-coronal stops in the context of VC₁C₂V, after critically reviewing the three types of phonetic accounts of coronal/non-coronal asymmetry: articulatory, perceptual, and gestural-overlap accounts. An experiment was conducted to test whether the phenomenon in question may arise from listeners' inability to identify the weaker place cues in VC transitions, as argued by Ohala (1990), i.e., that coronals have weak place cues that cause listeners' misperception. Spliced nonsense VC₁C₂V utterances were given to 20 native speakers of English and Korean. Data analysis showed that the majority of the subjects reported C₂ as C₁. More importantly, the place of articulation of C₁ did not affect the listeners' identification: compared to non-coronals, coronals did not show a significantly lower rate of correct identification. This study challenges the view that coronal/non-coronal asymmetry is attributable to the weak place cues of coronals, providing evidence that CV cues are perceptually more salient than VC cues. While a perceptual-saliency account may explain the frequent occurrence of regressive assimilation across languages, it cannot be extended to coronal/non-coronal asymmetry.
