• Title/Summary/Keyword: French vowel

A Comparative Study of Korean and French Vowel Systems -An Experimental Phonetic and Phonological Perspective-

  • Kim, Seon-Jung; Lee, Eun-Yung
    • Speech Sciences, v.8 no.1, pp.53-66, 2001
  • This paper aims to investigate the acoustic characteristics of the vowels attested in Korean and French and to seek a way of understanding them from a phonological point of view. We first compare the two vowel systems by measuring the actual formant frequencies using CSL. It is shown that the first and second formants vary over a wider range in French than in Korean. To understand the two vowel systems from a phonological point of view, we apply the theory of Licensing Constraints proposed and developed by Kaye (1994) and Charette and Kaye (1994), and we propose the licensing constraints placed on the vowels of both Korean and French. For Korean, we propose the licensing constraint that both elements I and U must be heads. For French, we claim the following licensing constraints: U in a headed expression must be the head, A cannot be a head, and nothing can only license an expression containing A.

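A rough sense of the formant measurements behind studies like the one above can be had with any formant tracker. The paper used CSL; the minimal sketch below uses Praat via the parselmouth package instead, and the file names and vowel labels are illustrative assumptions rather than the study's materials.

```python
# A minimal sketch: read each vowel token and report F1/F2 at the vowel midpoint.
# parselmouth stands in for CSL here; the file names are hypothetical.
import parselmouth

VOWEL_FILES = {"a": "fr_a.wav", "e": "fr_e.wav", "i": "fr_i.wav"}  # hypothetical tokens

def midpoint_formants(path):
    snd = parselmouth.Sound(path)
    formant = snd.to_formant_burg()           # Burg-method formant tracking
    t = snd.duration / 2                      # sample at the vowel midpoint
    return (formant.get_value_at_time(1, t),  # F1 (Hz)
            formant.get_value_at_time(2, t))  # F2 (Hz)

for vowel, path in VOWEL_FILES.items():
    f1, f2 = midpoint_formants(path)
    print(f"[{vowel}]  F1 = {f1:.0f} Hz   F2 = {f2:.0f} Hz")
```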

The identification of /l/ in Spanish and French

  • Jorge A. Gurlekian; Benoit Jacques; Miguelina Guirao
    • Proceedings of the KSPS conference, 1996.10a, pp.521-528, 1996
  • This presentation explores the perceptual characteristics of the lateral sound /l/ in CV syllables. In initial position we found that /l/ has well-marked formant transitions. Several questions then arise: 1) are these formant structures dependent on the following vowel? 2) do the formant transitions provide an additional cue for identification? Considering that the French vocalic system presents a greater variety of vowels than Spanish, several experiments were designed to verify to what extent a more extensive range of vocalic timbres contributes to the perception of /l/. Natural emissions of /l/ produced in Argentine Spanish and Canadian French CV syllables were recorded, where V was successively /i, e, a, o, u/ for Spanish and /i, e, ɛ, a, ɑ, o, u, y, ø/ for French. For each item, the segment C was maintained and V was replaced, by cutting and splicing, with each of the remaining vowels without transitions. Results of the identification tests for Spanish show that natural /l/ segments with a low F1 and high upper formants (F3, F4) can be clearly identified in the /i, e, u/ vowel contexts without transitions. For French subjects, the combination of /l/ with a vowel without transitions yielded correct identifications for its own original vowel context in /e, ɛ, y, ø/. For both languages, in all these combinations, F1 values remained rather steady along the syllable. In the case of /o, u/, the F2 difference very likely led to a variety of perceptions of the original /l/. For example, in /lu/, French subjects reported some identifications of /l/ as a vowel, mainly /y/. Our observations reinforce the importance of F1 as a relevant cue for /l/, and the influence of the relative distance between the formant frequencies of both components.

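The cut-and-splice manipulation described above (keeping the /l/ portion of one syllable and appending the vowel of another, without transitions) can be sketched with ordinary waveform editing. The boundary times and file names below are hypothetical; in practice they would come from hand segmentation of the recordings.

```python
# Sketch of cross-splicing: keep the consonant portion of one CV syllable and
# append the vowel portion of another. Times and file names are placeholders.
import numpy as np
import soundfile as sf

def cross_splice(lateral_file, lateral_end, vowel_file, vowel_start):
    """Keep C from lateral_file (up to lateral_end s) and V from vowel_file (from vowel_start s)."""
    c_sig, sr1 = sf.read(lateral_file)
    v_sig, sr2 = sf.read(vowel_file)
    assert sr1 == sr2, "sampling rates must match before splicing"
    c_part = c_sig[: int(lateral_end * sr1)]
    v_part = v_sig[int(vowel_start * sr2):]
    return np.concatenate([c_part, v_part]), sr1

# e.g. the /l/ of "la.wav" joined to the vowel of "li.wav" (no transition kept)
spliced, sr = cross_splice("la.wav", lateral_end=0.080, vowel_file="li.wav", vowel_start=0.095)
sf.write("l_i_spliced.wav", spliced, sr)
```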

Analysis of the typology of errors in French pronunciation by Arabic speakers and a proposal for phonetic correction: Based on Younes's research paper (아랍어권 학습자들에 의한 프랑스어 발음 오류의 유형 분류와 개선 방안: Younes의 논문을 중심으로)

  • JUNG, Il-Young
    • Cross-Cultural Studies, v.27, pp.7-29, 2012
  • This study aims to analyze, focusing on the thesis of Younes, the pronunciation errors that most frequently occur when Arabic speakers learn French pronunciation, and to suggest an effective study plan for reducing such errors along with an effective learning method. The first part examines how the Arabic and French pronunciation systems differ, especially by comparing and analyzing their systems of graphemes and phonemes; here we focus on the fact that Arabic is a consonant-centered language, while French is a vowel-centered language. In the second part, we discuss the causes and types of errors that occur when Arabic speakers learn French pronunciation. As for the categories of errors, we separate them into consonants and vowels. We propose possible methods for use in learning, focusing on the consonants /b/, /v/, /p/, /b/ and the vowels /y/ and /ø/, which do not exist in the Arabic pronunciation system. One of the difficulties professors in the Arabic-speaking world face in teaching French to native learners is how to solve, on a phonetic basis, the problems concerning speaking and reading ability, which belong to the oral skills among the critical components of foreign language education: listening, speaking, reading, and writing. In fact, the problems that occur in learning a foreign language are experienced not only by Arab learners but by any group of learners whose mother tongue has a pronunciation system distinctly different from that of the target language. The important fact professors should recognize regarding the study of pronunciation is that they should encourage learners to reach a level acceptable for proper communication rather than push them to attain the same ability as native speakers. Even though it cannot be said that the methods suggested in this study have an absolute effect in reducing errors when learning the French pronunciation system, I hope they can be at least a small help.

A comparative study between French schwa and Korean [i] - An experimental phonetic and phonological perspective -

  • Lee, Eun-Yung; Kim, Seon-Jung
    • Speech Sciences, v.7 no.1, pp.171-186, 2000
  • The aim of this paper is to investigate the acoustic characteristics of the French schwa [ə] and Korean [i] and to seek a way of understanding them from a phonological point of view. These two vowels have similar distributional properties, i.e. they alternate with zero in some contexts. Thus, in both languages, they are not found when immediately followed by a nucleus with phonetic content, nor in word-final position. We first compare the two vowels by measuring the actual formant frequencies, pitch, and energy using CSL. We also consider whether the realisation of the two vowels is affected by speech rate. In order to show that the realisation of the two vowels in both languages is not arbitrary but rather predictable, we introduce the notion of proper government, proposed and developed by Kaye (1987, 1990) and Charette (1991).


Noise Effects on Foreign Language Learning (소음이 외국어 학습에 미치는 영향)

  • Lim, Eun-Su; Kim, Hyun-Gi; Kim, Byung-Sam; Kim, Jong-Kyo
    • Speech Sciences, v.6, pp.197-217, 1999
  • In a noisy classroom, the acoustic-phonetic features of the teacher's speech and the perceptual performance of learners change in comparison with a quiet environment. Acoustic analyses were carried out on a set of French monosyllables consisting of 17 consonants and three vowels /a, e, i/, produced by one male speaker talking in quiet and in 50, 60, and 70 dB SPL of masking noise presented over headphones. The results of the acoustic analyses showed consistent differences in the energy and formant center-frequency amplitude of consonants and vowels, in the F1 frequency of vowels, and in the duration of voiceless stops, suggesting an increase in vocal effort. The perceptual experiments, in which 18 female undergraduate students learning French served as subjects, were conducted in quiet and in 50 and 60 dB of masking noise. The identification scores for consonants were higher for Lombard speech than for normal speech, suggesting that the speaker's vocal effort helps overcome the masking effect of noise. With increased noise level, the perceptual responses to the French consonants tended to become more varied, and the subjective reaction scores to the noise, using vocabulary representative of an 'unpleasant' sensation, tended to be higher. From the point of view of L2 (second language) acquisition, the influence of L1 (first language) on L2 observed in the perceptual results supports the interference theory.

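The masking conditions used in the study (50, 60, and 70 dB SPL of noise over headphones) require calibrated playback, so they cannot be reproduced exactly in software alone. The sketch below only illustrates the general idea of mixing noise with a stimulus at controlled levels, expressed relative to the speech RMS rather than in absolute SPL; the file name is a placeholder.

```python
# Mix white noise with a speech stimulus at a target signal-to-noise ratio.
# This is a relative-level stand-in for the study's calibrated dB SPL masking.
import numpy as np
import soundfile as sf

def add_noise(signal, snr_db, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(signal))
    sig_rms = np.sqrt(np.mean(signal ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    noise *= sig_rms / (noise_rms * 10 ** (snr_db / 20))   # scale noise to reach snr_db
    return signal + noise

speech, sr = sf.read("fr_syllable.wav")    # hypothetical mono stimulus
for snr in (10, 0, -10):                   # lower SNR = heavier masking
    sf.write(f"masked_snr_{snr}dB.wav", add_noise(speech, snr), sr)
```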

Target F2 Values of Coronal Stops in Korean, English, and French (설단 폐쇄음의 목표 F2 값: 한국어, 영어, 불어의 비교)

  • Oh, Eun-Jin
    • Speech Sciences, v.10 no.4, pp.81-91, 2003
  • The aim of this study was to estimate the target F2 value of the coronal plain stop in Korean and the degree of deviation from that target in the context of various vowels, and to compare the Korean results with those for English and French. An acoustic analysis showed that the mean F2 value of the Korean coronal stop produced by 10 male speakers was 1,855 Hz, and that the deviation from the target was 94 Hz in the context of [i], 204 Hz in the context of [u], and 407 Hz in the context of [o]. The target F2 of the coronal stop was the highest in English (1,929 Hz) and the lowest in French (1,662 Hz), and the deviation from the target in the context of the high back vowel was the largest in French (257 Hz) and the smallest in English (73 Hz).

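One simple way to operationalize a "target" F2 and its context-dependent deviation, in the spirit of the abstract above, is to pool F2 measured at the stop release across vowel contexts and compare each context mean against that pooled mean. This is only an illustration; the paper's own estimation procedure may differ (locus-equation style estimates are also common), and the numbers below are made up.

```python
# Sketch: take the pooled mean F2 at stop release as the "target" and report the
# absolute deviation of each vowel-context mean from it. Values are placeholders.
from statistics import mean

# hypothetical F2 values (Hz) measured at the stop release, grouped by following vowel
f2_onsets = {
    "i": [1920, 1880, 1905],
    "u": [1650, 1700, 1620],
    "o": [1480, 1510, 1450],
}

target_f2 = mean(v for vals in f2_onsets.values() for v in vals)
print(f"target F2 ~= {target_f2:.0f} Hz")
for vowel, vals in f2_onsets.items():
    print(f"  context [{vowel}]: deviation {abs(mean(vals) - target_f2):.0f} Hz")
```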

Electromyographic evidence for a gestural-overlap analysis of vowel devoicing in Korean

  • Jun, Sun-A; Beckman, M.; Niimi, Seiji; Tiede, Mark
    • Speech Sciences, v.1, pp.153-200, 1997
  • In languages such as Japanese, it is very common to observe that short peripheral vowels become completely voiceless when surrounded by voiceless consonants. This phenomenon is also known in Montreal French, Shanghai Chinese, Greek, and Korean. Traditionally it has been described as a phonological rule that either categorically deletes the vowel or changes the [+voice] feature of the vowel to [-voice]. This analysis was supported by Sawashima's (1971) and Hirose's (1971) observation that there are two distinct EMG patterns for voiced and devoiced vowels in Japanese. Close examination of the phonetic evidence based on acoustic data, however, shows that these phonological characterizations are not tenable (Jun & Beckman 1993, 1994). In this paper, we examine the vowel devoicing phenomenon in Korean using EMG, fiberscopic, and acoustic recordings of 100 sentences produced by one Korean speaker. The results show that there is variability in the 'degree of devoicing' in both the acoustic and EMG signals, and in the patterns of glottal closing and opening across different devoiced tokens. There seems to be no categorical difference between devoiced and voiced tokens, either in EMG activity events or in glottal patterns. All of these observations support the notion that vowel devoicing in Korean cannot be described as the result of the application of a phonological rule. Rather, devoicing seems to be a highly variable 'phonetic' process, a more or less subtle variation in the specification of such phonetic metrics as the degree and timing of glottal opening, or of the associated subglottal pressure or intra-oral airflow associated with concurrent tone and stricture specifications. Some token-pair comparisons are amenable to an explanation in terms of gestural overlap and undershoot. However, the effect of gestural timing on vocal fold state seems to be a highly nonlinear function of the interaction among specifications for the relative timing of glottal adduction and abduction gestures, the amplitudes of the overlapping gestures, the aerodynamic conditions created by concurrent oral and tonal gestures, and so on. In summary, to understand devoicing, it will be necessary to examine its effects on the phonetic representation of events in many parts of the vocal tract, and at many stages of the speech chain between the motor intent and the acoustic signal that reaches the hearer's ear.


Acoustic Realization of Metrical Structure in Orally Produced Korean Modern Poetry (한국 현대시 운율의 음향 발현)

  • Kim, Hyun-Gi; Hong, Ki-Hwan; Kim, Sun-Sook
    • Speech Sciences, v.11 no.3, pp.181-192, 2004
  • The metrical structures of orally produced poetry are generally analyzed in terms of accent, metre, and syllable. The purpose of this study is to investigate the metrical structures of Korean modern poetry using a computer-implemented speech analysis system. Two poems by famous poets, 'Confidential Talk (Miloe)' and 'A Buddhist Dance (Sungmu)', were selected for prosodic analysis. The informant is a 60-year-old professor specializing in Korean and French poetry. The syllable structures of the poems were analyzed primarily by vowel timbres, which can be classified into compact and diffuse vowels according to the F2-F1 distance. The perceptual cues of consonants were analyzed by VOT and tenseness features of articulation. Rhythm is classified into dactyl, anapest, trochee, spondee, and iamb. As a result, the syllable structures of Korean modern poetry were mainly CV and CVC, and the reading time of each line was 3-4 seconds for lines of 12 to 15 syllables. The main metres of Korean modern poems were iambic and anapestic. The breaks between lines were demarcated by grammatical structure or meaning rather than by phonetic structure.

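The compact/diffuse classification mentioned in the abstract hinges on the F2-F1 distance: a small distance marks a compact vowel, a large one a diffuse vowel. A toy version, with an assumed threshold and invented formant values, is shown below.

```python
# Classify a vowel as compact or diffuse by the F2-F1 distance.
# The 1000 Hz threshold and the formant values are illustrative assumptions.
def vowel_timbre(f1_hz, f2_hz, threshold_hz=1000.0):
    return "compact" if (f2_hz - f1_hz) < threshold_hz else "diffuse"

for vowel, (f1, f2) in {"a": (800, 1300), "i": (300, 2300), "u": (350, 800)}.items():
    print(vowel, vowel_timbre(f1, f2))
```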

Analyzing vowel variation in Korean dialects using phone recognition

  • Jooyoung Lee; Sunhee Kim; Minhwa Chung
    • Phonetics and Speech Sciences, v.15 no.4, pp.101-107, 2023
  • This study aims to propose an automatic method of detecting vowel variation in the Korean dialects of Gyeong-sang and Jeol-la. The method is based on error patterns extracted using phone recognition. Canonical and recognized phone sequences are compared, and statistical analyses distinguish the vowels common to both dialects from the vowels with high mismatch rates specific to each dialect. The dialect-common vowels show monophthongization of diphthongs. The vowels unique to each dialect are /we/ realized as [e] and /ʌ/ realized as [ɰ] in the Gyeong-sang dialect, and /ɰi/ realized as [ɯ] in the Jeol-la dialect. These results corroborate previous dialectology reports on the phonetic realization of the Korean dialects. The proposed method suggests the possibility of automatically accounting for dialect patterns.
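The error-pattern extraction described above amounts to aligning canonical phone sequences with recognizer output and tallying vowel substitutions. The sketch below uses difflib as a stand-in for a proper phone-level aligner; the phone sequences and the vowel inventory are invented for illustration.

```python
# Align canonical vs. recognized phone sequences and count vowel substitutions.
from collections import Counter
from difflib import SequenceMatcher

def vowel_substitutions(canonical, recognized, vowels):
    subs = Counter()
    for op, i1, i2, j1, j2 in SequenceMatcher(None, canonical, recognized).get_opcodes():
        if op == "replace":
            for c, r in zip(canonical[i1:i2], recognized[j1:j2]):
                if c in vowels:
                    subs[(c, r)] += 1          # canonical vowel c was recognized as r
    return subs

VOWELS = {"we", "ʌ", "ɰi", "e", "ɯ"}           # illustrative vowel inventory
canonical  = ["k", "we", "n", "ʌ", "k"]        # hypothetical canonical phones
recognized = ["k", "e",  "n", "ɰ", "k"]        # hypothetical recognizer output
print(vowel_substitutions(canonical, recognized, VOWELS))
# -> Counter({('we', 'e'): 1, ('ʌ', 'ɰ'): 1})
```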

Automatic severity classification of dysarthria using voice quality, prosody, and pronunciation features (음질, 운율, 발음 특징을 이용한 마비말장애 중증도 자동 분류)

  • Yeo, Eun Jung; Kim, Sunhee; Chung, Minhwa
    • Phonetics and Speech Sciences, v.13 no.2, pp.57-66, 2021
  • This study focuses on the automatic classification of the severity of dysarthric speech based on speech intelligibility. Speech intelligibility is a complex measure affected by features from multiple speech dimensions, yet most previous studies are restricted to features from a single dimension. To capture the characteristics of the speech disorder effectively, we extracted features from three speech dimensions: voice quality, prosody, and pronunciation. Voice quality consists of jitter, shimmer, Harmonics-to-Noise Ratio (HNR), number of voice breaks, and degree of voice breaks. Prosody includes speech rate (total duration, speech duration, speaking rate, articulation rate), pitch (F0 mean, standard deviation, minimum, maximum, median, 25th and 75th quartiles), and rhythm (%V, deltas, Varcos, rPVIs, nPVIs). Pronunciation contains the Percentage of Correct Phonemes (Percentage of Correct Consonants/Vowels/Total Phonemes) and the degree of vowel distortion (Vowel Space Area, Formant Centralization Ratio, Vowel Articulatory Index, F2-Ratio). Experiments were conducted using various feature combinations. The results indicate that using features from all three speech dimensions gives the best result, with an F1-score of 80.15, compared to using features from only one or two dimensions. This implies that voice quality, prosody, and pronunciation features should all be considered in the automatic severity classification of dysarthria.
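As a rough illustration of the voice-quality portion of the feature set (jitter, shimmer, HNR) and of F1-score evaluation, the sketch below uses Praat via parselmouth and scikit-learn. It is not the authors' pipeline: the file name, pitch-range settings, the macro averaging, and the dummy severity labels are all assumptions.

```python
# Extract jitter, shimmer, and HNR with Praat (via parselmouth), then score a
# hypothetical severity classifier with an F1 metric. Inputs are placeholders.
import parselmouth
from parselmouth.praat import call
from sklearn.metrics import f1_score

def voice_quality_features(path, f0min=75, f0max=500):
    snd = parselmouth.Sound(path)
    point_process = call(snd, "To PointProcess (periodic, cc)", f0min, f0max)
    jitter = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([snd, point_process], "Get shimmer (local)",
                   0, 0, 0.0001, 0.02, 1.3, 1.6)
    hnr = call(snd.to_harmonicity_cc(), "Get mean", 0, 0)
    return {"jitter": jitter, "shimmer": shimmer, "hnr": hnr}

print(voice_quality_features("speaker01_passage.wav"))   # hypothetical recording

# dummy gold vs. predicted severity labels, scored with macro-averaged F1
y_true = [0, 1, 2, 3, 1, 2]
y_pred = [0, 1, 2, 2, 1, 2]
print(f"macro F1 = {f1_score(y_true, y_pred, average='macro') * 100:.2f}")
```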