• Title/Summary/Keyword: phonemic

The Effect of Visual Cues in the Identification of the English Consonants /b/ and /v/ by Native Korean Speakers (한국어 화자의 영어 양순음 /b/와 순치음 /v/ 식별에서 시각 단서의 효과)

  • Kim, Yoon-Hyun;Koh, Sung-Ryong;Hazan, Valerie
    • Phonetics and Speech Sciences
    • /
    • v.4 no.3
    • /
    • pp.25-30
    • /
    • 2012
  • This study investigated whether native Korean listeners could use visual cues for the identification of the English consonants /b/ and /v/. Both auditory and audiovisual tokens of minimal word pairs in which the target phonemes were located in word-initial or word-medial position were used. Participants were instructed to decide which consonant they heard under 2×2 conditions: cue (audio-only, audiovisual) and location (word-initial, word-medial). Mean identification scores were significantly higher in the audiovisual than in the audio-only condition and in the word-initial than in the word-medial condition. In addition, following signal detection theory, sensitivity (d') and response bias (c) were calculated from both hit rates and false alarm rates. These measures showed that the higher identification rate in the audiovisual condition was related to an increase in sensitivity. There were no significant differences in response bias across conditions. This result suggests that native Korean speakers can use visual cues while identifying confusable non-native phonemic contrasts. Visual cues can enhance non-native speech perception.
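
As a concrete illustration of the signal detection measures mentioned in this abstract, the sketch below computes sensitivity (d') and response bias (c) from hit and false-alarm counts. The counts and the log-linear correction are assumptions for illustration, not the study's data or exact procedure.

```python
from scipy.stats import norm

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response bias (c) from a 2x2 outcome table.

    A standard log-linear correction is applied so that rates of exactly
    0 or 1 do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa     # sensitivity: separation of the two distributions
    c = -0.5 * (z_hit + z_fa)  # bias: 0 = neutral, > 0 = conservative
    return d_prime, c

# Hypothetical listener: 45/50 correct on /v/ trials, 10/50 false alarms on /b/
print(dprime_and_c(45, 5, 10, 40))
```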

Reduction and Frequency Analyses of Vowels and Consonants in the Buckeye Speech Corpus

  • Yang, Byung-Gon
    • Phonetics and Speech Sciences
    • /
    • v.4 no.3
    • /
    • pp.75-83
    • /
    • 2012
  • This study had three aims: first, to examine the degree of deviation between dictionary-prescribed symbols and the actual speech of American English speakers; second, to measure the frequency of vowel and consonant production by American English speakers; and third, to investigate gender differences in the segmental sounds of a speech corpus. The Buckeye Speech Corpus was recorded by forty American male and female subjects, one hour per subject. The vowels and consonants in both the phonemic and phonetic transcriptions were extracted from the original files of the corpus, and their frequencies were obtained using scripts in the free software R. Results were as follows. First, the American English speakers produced a reduced number of vowels and consonants in daily conversation; the reduction rate from the dictionary transcriptions to the actual transcriptions was around 38.2%. Second, the American English speakers used more high front and low back vowels, while stops, fricatives, and nasals accounted for three-fourths of the consonants. This indicates that the segmental inventory has a nonlinear frequency distribution in the speech corpus. Third, the two gender groups produced vowels and consonants similarly, with a few noticeable differences in their speech. From these results we propose that English teachers design pronunciation education that reflects actual speech sounds and that linguists find a way to establish unmarked segmentals from speech corpora.
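
The frequencies were obtained in R; the following Python sketch illustrates the same kind of computation: extracting segment frequencies and a reduction rate from paired phonemic and phonetic transcriptions. The token data here are invented stand-ins for Buckeye annotation lines.

```python
from collections import Counter

# Hypothetical word tokens: (word, dictionary/phonemic segments,
# observed/phonetic segments), standing in for corpus annotation lines.
tokens = [
    ("probably", ["p","r","aa","b","ah","b","l","iy"], ["p","r","aa","l","iy"]),
    ("and",      ["ae","n","d"],                        ["ae","n"]),
]

dict_segs = sum(len(t[1]) for t in tokens)
actual_segs = sum(len(t[2]) for t in tokens)
reduction_rate = (dict_segs - actual_segs) / dict_segs
print(f"reduction rate: {reduction_rate:.1%}")  # share of prescribed segments dropped

# Frequency of each segment actually produced
freq = Counter(seg for _, _, actual in tokens for seg in actual)
print(freq.most_common())
```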

Development of Automatic Lip-sync MAYA Plug-in for 3D Characters (3D 캐릭터에서의 자동 립싱크 MAYA 플러그인 개발)

  • Lee, Sang-Woo;Shin, Sung-Wook;Chung, Sung-Taek
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.3
    • /
    • pp.127-134
    • /
    • 2018
  • In this paper, we developed an Auto Lip-Sync Maya plug-in that extracts Korean phonemes from voice data and Korean text information and produces high-quality 3D lip-sync animation from the segmented phonemes. In the developed system, phoneme segmentation classifies sounds into the 8 vowels and 13 consonants used in Korean, with reference to the 49 phonemes provided by the Microsoft Speech API (SAPI) engine. In addition, although vowels and consonants are pronounced with a variety of mouth shapes, the same viseme can be applied to phonemes that share a mouth shape. Based on this, we developed the Python-based Auto Lip-Sync Maya plug-in so that lip-sync animation can be generated automatically in one pass.
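
The core idea of such a plug-in, mapping many phonemes onto a smaller set of shared visemes and emitting keyframes, can be sketched as follows. The groupings and frame spacing are illustrative assumptions, not the paper's actual tables or Maya API calls.

```python
# Toy phoneme-to-viseme mapping: several phonemes with similar mouth
# shapes share one viseme, keeping the keyframe inventory small.
VISEME_MAP = {
    "m": "closed_lips", "b": "closed_lips", "p": "closed_lips",
    "a": "open_wide",
    "o": "rounded", "u": "rounded",
    "s": "teeth", "j": "teeth",
}

def phonemes_to_keyframes(phonemes, frames_per_phoneme=3):
    """Turn a phoneme sequence into (frame, viseme) keyframe pairs."""
    keyframes = []
    for i, ph in enumerate(phonemes):
        viseme = VISEME_MAP.get(ph, "neutral")  # fall back to a rest pose
        keyframes.append((i * frames_per_phoneme, viseme))
    return keyframes

print(phonemes_to_keyframes(["a", "b", "o"]))
```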

Categorization and production in lexical pitch accent contrasts of North Kyungsang Korean

  • Kim, Jungsun
    • Phonetics and Speech Sciences
    • /
    • v.10 no.1
    • /
    • pp.1-7
    • /
    • 2018
  • Categorical production in language processing helps speakers produce phonemic contrasts. The present study examined this categorization and production with both a production-based and an imitation-based approach. Contrastive signals in speakers' speech reflect the shapes of category boundaries, and signals that carry information about lexical pitch accent contrasts can introduce categorical distinctions for productive and cognitive selection. The experiment was conducted with nine North Kyungsang speakers for a production task and nine North Kyungsang speakers for an imitation task. The first finding of the present study is the rigidity of categorical production, which controls the boundaries of lexical pitch accent contrasts. The categorization in North Kyungsang speakers' production allows them to classify minimal pitch accent contrasts, and categorical production in imitation appeared in two clusters representing two meaningful contrasts. The second finding is that there are individual differences in speakers' production and imitation responses: the performances of individual speakers showed a variety of curves. For the HL-LH patterns, categorical production tended to be highly distinctive compared to the other pitch accent patterns (HH-HL and HH-LH), which showed more continuous curves than categorical curves. Finally, the present study shows that, for North Kyungsang speakers, imitative production is the core type of categorical production for determining the existence of the lexical pitch accent system. However, several questions remain about how to define such categorical production, which suggests directions for future research.
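
One common way to quantify how categorical a response curve is, in the spirit of the curves discussed above, is to fit a logistic function and inspect its slope. The continuum steps and response proportions below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Identification function: probability of one category along a continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical proportions of "HL" responses along a 7-step pitch continuum
steps = np.arange(1, 8)
p_hl = np.array([0.02, 0.05, 0.10, 0.48, 0.90, 0.96, 0.99])

(x0, k), _ = curve_fit(logistic, steps, p_hl, p0=[4.0, 1.0])
print(f"boundary at step {x0:.2f}, slope {k:.2f}")
# A steep slope (large k) indicates a categorical curve; a shallow slope
# indicates a more continuous response curve.
```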

The Speech Characteristics of Korean Dysarthria: An Experimental Study with the Use of a Phonetic Contrast Intelligibility Test (음소대조 검사방법을 이용한 마비말장애인의 말소리 명료도 특성)

  • Kim Soo Jin;Kim Young Tae;Kim Gi Na
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.1E
    • /
    • pp.28-33
    • /
    • 2005
  • This study was designed to suggest an assessment tool for analyzing the characteristics of Korean phonetic contrast intelligibility among dysarthric individuals. The phonetic contrast factors underlying the intelligibility deficit in Korean dysarthric patients were analyzed through stepwise regression analysis. The 19 acoustic-phonetic contrasts proposed by Kent et al. (1999) have been claimed to be useful for clinical assessment and research on dysarthria. However, the test cannot be applied directly to Korean patients due to linguistic differences between English and Korean. Thus, it is necessary to devise a Korean word intelligibility test that reflects the distinct characteristics of the Korean language. To identify the speech error characteristics of a Korean dysarthric group, a Korean word list was audio-recorded by 3 spastic, 4 flaccid, and 5 mixed-type dysarthric patients. The word list consisted of monosyllabic consonant-vowel-consonant (CVC) real word pairs. Stimulus words included 41 phonemic contrast pairs and six triplets. The results showed that the percentage of errors in the final-position contrasts was higher than in any other position. Unlike the results of previous studies, the initial-position contrasts were crucial in predicting overall intelligibility among Korean patients.
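
A rough sketch of the kind of analysis described: per-position contrast error rates used to predict overall intelligibility. Ordinary least squares stands in for the stepwise regression, and all numbers are invented.

```python
import numpy as np

# Hypothetical per-speaker error rates (proportions) for initial-, medial-,
# and final-position contrasts, plus an overall intelligibility score.
errors = np.array([
    # initial, medial, final
    [0.10, 0.15, 0.40],
    [0.25, 0.20, 0.55],
    [0.05, 0.10, 0.30],
    [0.30, 0.25, 0.60],
    [0.15, 0.30, 0.45],
    [0.20, 0.05, 0.50],
])
intelligibility = np.array([0.85, 0.60, 0.92, 0.50, 0.70, 0.75])

# Which contrast positions best predict overall intelligibility?
X = np.column_stack([np.ones(len(errors)), errors])
coef, *_ = np.linalg.lstsq(X, intelligibility, rcond=None)
print(dict(zip(["intercept", "initial", "medial", "final"], coef.round(3))))
```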

Adaptive Changes in the Grain-size of Word Recognition (단어재인에 있어서 처리단위의 적응적 변화)

  • Lee, Chang H.
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2002.05a
    • /
    • pp.111-116
    • /
    • 2002
  • The regularity effect in printed word recognition and naming depends on ambiguities between single letters (small grain-size) and their phonemic values. As a given word is repeated and becomes more familiar, letter-aggregate size (grain-size) is predicted to increase, thereby decreasing the ambiguity between spelling pattern and phonological representation and, therefore, decreasing the regularity effect. Lexical decision and naming tasks were used to study the effect of repetition on the regularity effect for words. The familiarity of a word form was manipulated by presenting low- and high-frequency words, as well as by presenting half the stimuli in mixed upper- and lowercase letters (an unfamiliar form) and half in uniform case. In lexical decision, the regularity effect was initially strong for low-frequency words but became null after two presentations; in naming it was also initially strong but was merely reduced (although still substantial) after three repetitions. Mixed-case words were recognized and named more slowly and tended to show stronger regularity effects. The results were consistent with the primary hypothesis that familiar word forms are read faster because they are processed at a larger grain-size, which requires fewer operations to achieve lexical selection. Results are discussed in terms of a neurobiological model of word recognition based on brain imaging studies.
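
The regularity effect itself is simply a response-time difference between irregularly and regularly spelled words; the sketch below shows how it might shrink across presentations. The RTs are invented, not the study's data.

```python
from statistics import mean

# Hypothetical lexical-decision RTs (ms) by spelling regularity and
# presentation number for low-frequency words.
rts = {
    (1, "regular"): [640, 655, 630], (1, "irregular"): [720, 735, 710],
    (3, "regular"): [590, 600, 585], (3, "irregular"): [600, 610, 595],
}

for presentation in (1, 3):
    effect = (mean(rts[(presentation, "irregular")])
              - mean(rts[(presentation, "regular")]))
    print(f"presentation {presentation}: regularity effect = {effect:.0f} ms")
```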

A Study on the Generation of Multi-syllable Nonsense Wordset for the Assessment of Synthetic Speech (합성음성평가를 위한 다음절 무의미단어 생성과 이용에 관한 연구)

  • Jo, Cheol-Woo;Kim, Kyung-Tae;Lee, Yong-Ju
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.5
    • /
    • pp.51-58
    • /
    • 1994
  • Nowadays, many kinds of man-machine interfaces using speech signals, such as speech recognizers and speech synthesizers, are proposed and used in practice. Speech synthesis systems in particular are widely used in daily life, but methods for assessing them are still at an early stage. In this paper we propose a method for generating a multi-syllable nonsense word set for the purpose of synthetic speech assessment and apply the word set to a commercial text-to-speech system. Results of the experiment are presented, and it is verified that this method of generating a nonsense word set can be used to assess the intelligibility of the synthesizer at the phoneme level or at the level of phonemic environment.
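
A minimal sketch of multi-syllable nonsense-word generation of the kind proposed here: combine consonant and vowel inventories into CV syllables, concatenate them, and exclude real words. The toy romanized inventories and exclusion list are assumptions, not the paper's Korean materials.

```python
import itertools
import random

CONSONANTS = ["k", "t", "p", "s", "m"]
VOWELS = ["a", "i", "u", "o"]
REAL_WORDS = {"kata", "misu"}  # hypothetical exclusion list

# All CV syllables from the toy inventories
syllables = ["".join(cv) for cv in itertools.product(CONSONANTS, VOWELS)]

def nonsense_words(n_words, n_syllables, seed=0):
    """Sample unique multi-syllable strings, discarding any real words."""
    rng = random.Random(seed)
    words = set()
    while len(words) < n_words:
        word = "".join(rng.choice(syllables) for _ in range(n_syllables))
        if word not in REAL_WORDS:
            words.add(word)
    return sorted(words)

print(nonsense_words(5, 2))
```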

What can be learned from borrowed words? -The case of the Japanese language borrowing words ending with closed syllables-

  • Claude Roberge;Norico Hoki
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.245-245
    • /
    • 1996
  • When language A borrows words, it borrows them according to its own phonetic rules. In other words, language B, from which the borrowed words come, has to comply with the phonetic requirements of language A. It may be added that language A only borrows the elements, the types of syllables, and the accentuation that already exist in its own phonetic structure, and rejects everything that is not compatible. It operates exactly like a sieve. That is why borrowed words offer an excellent observation post for noticing how languages react in phonetic contact. The Japanese language has borrowed and is borrowing extensively from other languages and cultures, mainly English, in the fields of sports, medicine, industry, commerce, and the natural sciences. Relatively few new words are created from ancient Chinese or native roots. This presentation looks for the rules of borrowing and tries to show that this way of borrowing represents an organized system of its own. Three levels are studied in particular: the phonemic level, the syllable level, and the accentual level. This last point is specially targeted with the question of syllable tension and relaxation. Such a study of languages in phonetic contact could shed new light on the phonetic characteristics of the Japanese language and will confirm or weaken some conclusions already demonstrated elsewhere. We aim especially at the endings of borrowed words, where, it seems, the Japanese language manifests itself very strongly.
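
The "sieve" behavior described above can be caricatured as a rewrite rule: word-final codas that Japanese phonotactics disallow receive an epenthetic vowel. The rules below are simplified textbook patterns, not the authors' analysis, and only the final coda is repaired.

```python
# Simplified final-coda adaptation: Japanese generally disallows closed
# syllables (apart from the moraic nasal), so consonant-final loans get
# an epenthetic vowel at the end.
def adapt_final_coda(word):
    if word.endswith(("t", "d")):
        return word + "o"  # /t, d/ codas take /o/ (cf. "tesuto" for "test")
    if word[-1] in "kgszpbfvrl":
        return word + "u"  # most other codas take /u/
    return word            # vowel-final or nasal-final words pass the sieve

for w in ["test", "bag", "pen"]:
    print(w, "->", adapt_final_coda(w))
```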

Variation in vowel duration depending on voicing in American, British, and New Zealand English

  • Cho, Hyesun
    • Phonetics and Speech Sciences
    • /
    • v.8 no.3
    • /
    • pp.11-20
    • /
    • 2016
  • It is well known that vowels are shorter before voiceless consonants than before voiced ones in English, as in many other languages. Research has shown that the ratio of vowel durations in voiced and voiceless contexts in English is in the range of 0.6~0.8. However, little work has been done on whether this ratio varies across English varieties. In the production experiment in this paper, seven speakers from three varieties of English (New Zealand, British, and American English) read 30 pairs of (C)VC monosyllabic words differing in coda voicing (e.g., beat-bead). Vowel height, phonemic vowel length, and consonant manner were varied as well. As expected, vowel-shortening effects were found in all varieties: vowels were shorter before voiceless than before voiced codas. Overall vowel duration was longest in American English and shortest in New Zealand (NZ) English. In particular, vowel duration before voiceless codas was shortest in New Zealand English, indicating the most radical degree of shortening in this variety. As a result, the ratio of vowel durations across voicing contexts is lowest in NZ English, while American and British English do not show a significant difference from each other. In addition, consonant closure duration was examined. Whereas NZ speakers show the shortest vowel duration before a voiceless coda, their voiceless consonants have the longest closure duration, which suggests an inverse relationship between vowel duration and closure duration.
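
The ratio measure discussed above is simply mean vowel duration before voiceless codas divided by mean duration before voiced codas; a minimal sketch with invented durations:

```python
from statistics import mean

# Hypothetical vowel durations (ms) before voiceless vs. voiced codas;
# values are invented, not the paper's data.
durations = {
    ("beat", "voiceless"): [112, 118, 109],
    ("bead", "voiced"):    [165, 172, 160],
}

v_less = mean(durations[("beat", "voiceless")])
v_ed = mean(durations[("bead", "voiced")])
print(f"voiceless/voiced vowel duration ratio: {v_less / v_ed:.2f}")  # ~0.6-0.8 typical
```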

Speech processing strategy and executive function: Korean children's stop perception

  • Kong, Eun Jong;Yoo, Jeewon
    • Phonetics and Speech Sciences
    • /
    • v.9 no.3
    • /
    • pp.57-65
    • /
    • 2017
  • The current study explored how Korean-speaking children process the multiple acoustic cues (VOT and f0) to the stop laryngeal contrast (/t'/, /t/, and /tʰ/) and examined whether individual perceptual strategies are related to the general cognitive abilities involved in executive function (EF). Fifteen children (aged 7 to 8) participated in a speech perception task identifying the three Korean laryngeal stops (3AFC) while listening to auditory stimuli of C-/a/ with synthetically varying VOT and f0. They also completed a series of EF tasks measuring working memory, inhibition, and cognitive shifting ability. The findings showed that children used the two cues in a highly correlated manner. While children utilized VOT consistently for the three laryngeal categories, their use of f0 was either reduced or enhanced depending on the phonetic category. Importantly, the children's strategy of suppressing f0 for the tense-aspirated contrast was meaningfully associated with better cognitive abilities such as working memory, inhibition, and attentional shifting. As a preliminary experimental investigation, the current research demonstrated that listeners with inefficient processing strategies were poor at EF skills, suggesting that cognitive skills may be responsible for developmental variation in processing sub-phonemic information for linguistic contrasts.
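
Cue weighting of the kind analyzed here is often estimated by regressing category responses on the acoustic cues. The sketch below fits a logistic regression to simulated VOT/f0 identification data for one binary contrast; all values are invented and the paper's actual analysis may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vot = rng.uniform(10, 100, 200)  # voice onset time, ms
f0 = rng.uniform(180, 280, 200)  # fundamental frequency, Hz

# Simulated listener: "aspirated" responses driven by long VOT and high f0
logit = 0.08 * (vot - 55) + 0.03 * (f0 - 230)
y = (rng.random(200) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([vot, f0])
model = LogisticRegression().fit(X, y)
print("cue weights (VOT, f0):", model.coef_.round(3))
# Relative coefficient magnitude (ideally on standardized cues) indexes
# how much each acoustic cue drives the listener's category choice.
```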