• Title/Summary/Keyword: Phonetics


Phonation types of Korean fricatives and affricates

  • Lee, Goun
    • Phonetics and Speech Sciences / v.9 no.4 / pp.51-57 / 2017
  • The current study compared the acoustic features of the two phonation types of Korean fricatives (plain /s/, fortis /s'/) and the three types of affricates (aspirated /tsʰ/, lenis /ts/, and fortis /ts'/) in order to determine the phonetic status of the plain fricative /s/. Considering the different manners of articulation of fricatives and affricates, we examined four acoustic parameters (rise time, intensity, fundamental frequency, and Cepstral Peak Prominence (CPP)) in the productions of 20 native Korean speakers. The results showed that, unlike for Korean affricates, F0 does not distinguish the two fricatives, and that voice quality (CPP) distinguishes the phonation types of Korean fricatives and affricates only by grouping the non-fortis sibilants together. Therefore, based on the similarity between /tsʰ/ and /ts/ and the idiosyncratic pattern of /s/, this research concludes that the non-fortis fricative /s/ cannot be categorized as belonging to either phonation type.
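Cepstral Peak Prominence, one of the four parameters above, is commonly computed as the height of the cepstral peak above a regression line fitted over the quefrency range of plausible pitch periods. A minimal illustrative sketch (not the authors' measurement procedure; the window and F0 search range are assumptions):

```python
import numpy as np

def cepstral_peak_prominence(frame, sr, f0_min=60.0, f0_max=330.0):
    """Toy CPP: cepstral peak height above a linear trend fitted
    over the quefrency range of plausible pitch periods."""
    # Real cepstrum of a Hann-windowed frame
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) + 1e-12
    cepstrum = np.fft.irfft(np.log(spectrum))
    # Quefrency range corresponding to f0_max .. f0_min (in samples)
    q_lo, q_hi = int(sr / f0_max), int(sr / f0_min)
    quefs = np.arange(q_lo, q_hi)
    segment = cepstrum[q_lo:q_hi]
    # Linear regression line over that range
    slope, intercept = np.polyfit(quefs, segment, 1)
    trend = slope * quefs + intercept
    # CPP = peak height above the trend line (log-magnitude units)
    return float(np.max(segment - trend))
```

A strongly periodic (voiced-like) frame yields a higher CPP than a noise frame, which is why CPP serves as a voice-quality measure.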

The identification of Korean vowels /o/ and /u/ by native English speakers

  • Oh, Eunhae
    • Phonetics and Speech Sciences / v.8 no.1 / pp.19-24 / 2016
  • The Korean high back vowels /o/ and /u/ have been reported to be in a state of near-merger, especially among young female speakers. Along with this cross-generational change, the vowel's position within a word has been reported to yield different phonetic realizations. The current study examines native English speakers' ability to attend to the phonetic cues that distinguish the two merging vowels, and the effect of position (word-initial vs. word-final) on identification accuracy. Twenty-eight two-syllable words containing /o/ or /u/ in either initial or final position were produced by female native Korean speakers. The CV portion of each target word was excised and presented to six native English speakers. The results showed that although identification accuracy was lowest for /o/ in word-final position (41%), it rose to 80% in word-initial position. Acoustic analyses of the target vowels showed that /o/ and /u/ were differentiated on the height dimension only in word-initial position, suggesting that the English speakers may have perceived the distinctive F1 difference retained in the prominent position.

Characteristics of the Korean speakers' voice under easy Korean, difficult Korean and English reading situations (한국인의 쉬운 한국어, 어려운 한국어, 영어 읽기 상황에서의 음성 특성)

  • Kim, Ji-Eun
    • Phonetics and Speech Sciences / v.8 no.1 / pp.1-7 / 2016
  • The purpose of this study is to identify the acoustic characteristics of the voice under stressful and relaxed conditions. Ten male undergraduate students produced the vowels 아 /a/, 에 /e/, and 이 /i/ while reading English and difficult Korean texts (stressful conditions) and an easy Korean text (relaxed condition). F0, jitter, shimmer, NHR, F1, F2, and F3 values were then measured and analyzed. The results demonstrate that the speech parameters related to stress are jitter, shimmer, and NHR, in that these values were lower in the relaxed situation (easy Korean reading) than in the stressful situations (English and difficult Korean reading). This study lays a foundation for verifying that the analysis of acoustic characteristics can serve as a quantitative tool for measuring stress levels.
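Jitter and shimmer, two of the stress-related parameters above, quantify cycle-to-cycle perturbation of pitch period and amplitude. A minimal sketch of the local (first-order) variants, assuming the pitch periods and peak amplitudes have already been extracted (not the study's actual analysis pipeline):

```python
import numpy as np

def jitter_shimmer(periods, amplitudes):
    """Local jitter and shimmer: mean absolute difference between
    consecutive pitch periods (resp. peak amplitudes), normalized
    by the overall mean. Returns (jitter, shimmer) as ratios."""
    periods = np.asarray(periods, dtype=float)
    amps = np.asarray(amplitudes, dtype=float)
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    shimmer = np.mean(np.abs(np.diff(amps))) / np.mean(amps)
    return jitter, shimmer
```

A perfectly regular voice gives zero for both measures; perturbed periods or amplitudes push them up, matching the higher values reported under stress.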

Implementation of CNN in the view of mini-batch DNN training for efficient second order optimization (효과적인 2차 최적화 적용을 위한 Minibatch 단위 DNN 훈련 관점에서의 CNN 구현)

  • Song, Hwa Jeon;Jung, Ho Young;Park, Jeon Gue
    • Phonetics and Speech Sciences / v.8 no.2 / pp.23-30 / 2016
  • This paper describes implementation schemes for a CNN, viewed as mini-batch DNN training, for efficient second-order optimization. The parameters of the CNN are trained with the same update procedure as a DNN, simply by arranging the input image as a sequence of local patches, which is effectively equivalent to mini-batch DNN training. Through this conversion, second-order optimization, which provides higher performance, can be applied straightforwardly to train the CNN's parameters. In both image recognition on the MNIST DB and syllable-level automatic speech recognition, the proposed CNN implementation shows better performance than a DNN-based one.
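The patch-arrangement idea can be illustrated with a small sketch: unrolling each local patch into a row (often called im2col) turns a convolution into a single matrix product over a "mini-batch" of patch examples, so DNN-style parameter updates apply unchanged. This is an illustrative reconstruction, not the paper's code:

```python
import numpy as np

def im2col(image, k):
    """Arrange all k x k patches of a 2-D image as rows, so that a
    convolution becomes one matrix product over a DNN-style
    mini-batch of patch 'examples'. Returns (num_patches, k*k)."""
    h, w = image.shape
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append(image[i:i + k, j:j + k].ravel())
    return np.stack(rows)

def conv2d_via_im2col(image, kernel):
    """Valid-mode correlation (no kernel flip) as a single matmul."""
    k = kernel.shape[0]
    h, w = image.shape
    out = im2col(image, k) @ kernel.ravel()
    return out.reshape(h - k + 1, w - k + 1)
```

Because every patch row is treated exactly like one training example of a fully connected layer, batch-oriented second-order updates written for a DNN carry over without modification.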

Patterns of consonant deletion in the word-internal onset position: Evidence from spontaneous Seoul Korean speech

  • Kim, Jungsun;Yun, Weonhee;Kang, Ducksoo
    • Phonetics and Speech Sciences / v.8 no.1 / pp.45-51 / 2016
  • This study examined the deletion of onset consonants in word-internal position in spontaneous Seoul Korean speech, using data from speakers in their 20s extracted from the Korean Corpus of Spontaneous Speech (Yun et al., 2015). The proportion of deleted word-internal onset consonants was analyzed with a linear mixed-effects regression model. The factors that promoted onset deletion were primarily the type of consonant and its phonetic context: deletion was more likely for the lenis velar stop [k] than for the other consonants, and when the preceding vowel was the low central vowel [a]. Moreover, some speakers deleted onset consonants (e.g., [k] and [n]) more frequently than others, reflecting individual differences. This study implies that word-internal onsets undergo gradient reduction shaped by individual articulatory strategies.
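Before fitting a mixed-effects model of the kind used above, the raw deletion proportions have to be tabulated per speaker and consonant; a minimal sketch of that bookkeeping (illustrative only, not the authors' pipeline, with invented token tuples):

```python
from collections import defaultdict

def deletion_rates(tokens):
    """Proportion of deleted onsets per (speaker, consonant).
    tokens: iterable of (speaker, consonant, deleted_bool)."""
    counts = defaultdict(lambda: [0, 0])  # [deleted, total]
    for speaker, consonant, deleted in tokens:
        counts[(speaker, consonant)][1] += 1
        counts[(speaker, consonant)][0] += int(deleted)
    return {key: d / n for key, (d, n) in counts.items()}
```

Per-speaker rates like these are what motivate treating speaker as a random effect: the same consonant can show very different deletion proportions across individuals.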

F0 as a primary cue for signaling word-initial stops of Seoul Korean (서울 방언 어두 폐쇄음의 후속모음 F0)

  • Byun, Hi-Gyung
    • Phonetics and Speech Sciences / v.8 no.1 / pp.25-36 / 2016
  • Previous studies have shown that the voice onset times (VOTs) of aspirated and lenis stops have merged, and that post-stop fundamental frequency (F0) has emerged as the primary cue distinguishing the two stops in younger speakers' and in female speech. The purpose of this study is to demonstrate that the VOT merger of aspirated and lenis stops occurs only after the F0 difference between the two stops has stabilized; in other words, unless post-stop F0, a redundant feature, is fully developed, the VOT merger is unlikely to happen. Females established a stable F0 difference in stops earlier than males, so the VOT merger could proceed, and as a result females could lead the change from VOT to F0 as the primary cue for initial stops. This study also shows that speakers who have acquired F0 as a primary cue use it fully to distinguish lenis stops from the two other stop types (aspirated and fortis).

A Study on the Voice Onset Times of the Buckeye Corpus Stops (벅아이 코퍼스 파열음의 성대진동 개시시간 연구)

  • Park, Soo Hee;Yoon, Kyuchul
    • Phonetics and Speech Sciences / v.8 no.1 / pp.9-17 / 2016
  • The purpose of this work is to examine the voice onset times (VOTs) of the voiceless and voiced stops produced by the ten young male speakers of the Buckeye corpus [9]. Factors known to affect VOT were also extracted, including the place of articulation, the height of the following vowel, the location within the word, the presence of a preceding [s], the status of the target word as a content versus function word, the presence of syllabic stress, word frequency, and speech rate. The findings mostly agree with those of earlier studies on English, but with some exceptions and new discoveries. We hope this work contributes to characterizing the nature and properties of spontaneous English speech.

An SVM-based physical fatigue diagnostic model using speech features (음성 특징 파라미터를 이용한 SVM 기반 육체피로도 진단모델)

  • Kim, Tae Hun;Kwon, Chul Hong
    • Phonetics and Speech Sciences / v.8 no.2 / pp.17-22 / 2016
  • This paper devises a model to diagnose physical fatigue from speech features. It presents a machine learning method based on an SVM algorithm trained on various feature parameters: significant speech parameters, questionnaire responses, and bio-signal parameters obtained before and after an experiment inducing fatigue. Classification rates of 95%, 100%, and 90% were observed for the proposed models built on the three types of fatigue-related parameters, respectively. These results suggest that the proposed method can serve as a physical fatigue diagnostic model and that fatigue can be diagnosed easily by speech technology.
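An SVM classifier of the kind described can be sketched with scikit-learn (assumed available); the feature values below are invented placeholders standing in for speech parameters such as jitter, shimmer, and NHR, not the paper's data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature rows per recording: [jitter, shimmer, NHR]
rng = np.random.default_rng(0)
rested = rng.normal(loc=[0.5, 3.0, 0.10], scale=0.05, size=(20, 3))
fatigued = rng.normal(loc=[0.9, 4.0, 0.20], scale=0.05, size=(20, 3))
X = np.vstack([rested, fatigued])
y = np.array([0] * 20 + [1] * 20)  # 0 = rested, 1 = fatigued

# Standardize features before the RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
```

In practice the features would come from pre/post-fatigue recordings and the questionnaire/bio-signal measurements, with held-out evaluation rather than training-set accuracy.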

The acoustic realization of the Korean sibilant fricative contrast in Seoul and Daegu

  • Holliday, Jeffrey J.
    • Phonetics and Speech Sciences / v.4 no.1 / pp.67-74 / 2012
  • The neutralization of /sʰ/ and /s*/ in Gyeongsang dialects is a culturally salient stereotype that has received relatively little attention in the phonetic literature. The current study is a more extensive acoustic comparison of the sibilant fricative productions of Seoul and Gyeongsang dialect speakers. The data presented here suggest that, at least for young Seoul and Daegu speakers, there are few inter-dialectal differences in sibilant fricative production. These conclusions are supported by the output of mixed-effects logistic regression models that used aspiration duration, the spectral mean of the frication noise, and the H1-H2 of the following vowel to predict fricative type in each dialect. The clearest dialect difference was that Daegu speakers' /sʰ/ and /s*/ productions had overall shorter aspiration durations than those of Seoul speakers, suggesting the opposite of the traditional "/s*/ produced as [sʰ]" stereotype of Gyeongsang dialects. Further work is needed to investigate whether the /sʰ/-/s*/ neutralization in Daegu is perceptual rather than acoustic in nature.

Music Recognition Using Audio Fingerprint: A Survey (오디오 Fingerprint를 이용한 음악인식 연구 동향)

  • Lee, Dong-Hyun;Lim, Min-Kyu;Kim, Ji-Hwan
    • Phonetics and Speech Sciences / v.4 no.1 / pp.77-87 / 2012
  • Interest in music recognition grew dramatically after NHN and Daum released their mobile music-recognition applications in 2010. Audio-analysis methods for music recognition fall into two categories: recognition using an audio fingerprint and Query-by-Singing/Humming (QBSH). While fingerprint-based recognition takes a recording of the music itself as input, QBSH takes a user-hummed melody. This paper surveys research trends in fingerprint-based music recognition, focusing on two methods: one that generates fingerprints from the energy difference between consecutive frequency bands, and one that generates hash keys between spectral peak points. Details from the representative papers on each method are introduced.
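The first method mentioned, fingerprints from energy differences between consecutive bands, is the Haitsma-Kalker style scheme: each bit records whether the inter-band energy difference increased relative to the previous frame. A minimal sketch, assuming per-frame band energies have already been computed (illustrative, not any surveyed system's code):

```python
import numpy as np

def fingerprint_bits(band_energies):
    """Sub-fingerprint bits: bit (n, m) is 1 when the energy
    difference between bands m and m+1 increases from frame n
    to frame n+1. band_energies: array (frames, bands).
    Returns an int array of shape (frames-1, bands-1)."""
    E = np.asarray(band_energies, dtype=float)
    d = E[:, :-1] - E[:, 1:]      # difference between consecutive bands
    bits = (d[1:] - d[:-1]) > 0   # compare against the previous frame
    return bits.astype(int)
```

Matching then reduces to comparing Hamming distance between bit blocks of the query and the database, which is what makes the fingerprint robust and fast to search.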