• Title/Summary/Keyword: English Vowels


A study on English vowel duration with respect to the various characteristics of the following consonant (후행하는 자음의 여러 특성에 따른 영어 모음 길이에 관한 연구)

  • Yoo, Hyunbin;Rhee, Seok-Chae
    • Phonetics and Speech Sciences / v.14 no.1 / pp.1-11 / 2022
  • The purpose of this study is to investigate the difference in vowel duration due to the voicing of word-final consonants in English and its relation to the type of word-final consonant (stops vs. fricatives), (partial) devoicing, and stop release. Additionally, this study attempts to interpret the findings from the functional view that vowels before voiced consonants are produced with a longer duration in order to enhance the salience of the voicing of word-final consonants. A recording experiment was conducted with native English speakers, measuring vowel duration, the degree of (partial) devoicing of word-final voiced consonants, and the release of word-final stops. First, the results showed that the ratio of the duration difference was not influenced by the type of word-final consonant. Second, the higher the degree of (partial) devoicing of word-final voiced consonants, the longer the vowel duration before them, which was compatible with the prediction of the functional view. Lastly, the ratio of the duration difference was greater when word-final stops were uttered with a release than without, which was not consistent with the functional view. These results suggest that the voicing effect cannot be fully explained by its function of distinguishing the voicing of word-final consonants.

A Unicode based Deep Handwritten Character Recognition model for Telugu to English Language Translation

  • BV Subba Rao;J. Nageswara Rao;Bandi Vamsi;Venkata Nagaraju Thatha;Katta Subba Rao
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.101-112 / 2024
  • Telugu is the fourth most used language in India, spoken especially in Andhra Pradesh, Telangana, and Karnataka, and its speaker base is growing internationally. The language comprises dependent and independent vowels, consonants, and digits. Despite this, Telugu Handwritten Character Recognition (HCR) has seen little advancement. HCR is a neural-network technique for converting a document image into editable text, which can then be reused by other applications without starting over from the beginning each time. In this work, a Unicode-based Handwritten Character Recognition model (U-HCR) is developed for translating handwritten Telugu characters into English. Using the Centre of Gravity (CG), the model divides a compound character into individual characters with the help of their Unicode values. Both online and offline Telugu character datasets were used for training. A convolutional neural network extracts features from the scanned image, combined with machine-learning classifiers such as Random Forest and Support Vector Machine. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMS-P), and Adaptive Moment Estimation (ADAM) optimizers are used to enhance the performance of U-HCR and to reduce the loss value. On both online and offline datasets, the proposed model showed promising results, with accuracies of 90.28% for SGD, 96.97% for RMS-P, and 93.57% for ADAM.
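The three optimizers named in this abstract differ only in how they turn a gradient into a parameter update. As a hedged, self-contained sketch (a 1-D quadratic loss stands in for the U-HCR network; all hyperparameters are illustrative, not the paper's), the update rules look like:

```python
import math

def grad(w):
    # derivative of the toy loss f(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

def sgd(w, steps=500, lr=0.1):
    # plain gradient descent: step against the raw gradient
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def rmsprop(w, steps=500, lr=0.1, rho=0.9, eps=1e-8):
    s = 0.0  # running average of squared gradients
    for _ in range(steps):
        g = grad(w)
        s = rho * s + (1 - rho) * g * g
        w -= lr * g / (math.sqrt(s) + eps)
    return w

def adam(w, steps=500, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment estimate
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

for opt in (sgd, rmsprop, adam):
    print(opt.__name__, round(opt(10.0), 3))
```

All three drive the toy loss toward its minimum at w = 3; RMS-P and ADAM additionally rescale each step by a running estimate of the gradient's magnitude, which is what makes them less sensitive to the learning rate in practice.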

A study of /l/ velarization in American English based on the Buckeye Corpus (벅아이 코퍼스를 이용한 미국 영어의 /l/ 연구개음화 연구)

  • Sa, Jae-Jin
    • Phonetics and Speech Sciences / v.13 no.2 / pp.19-25 / 2021
  • It has been widely recognized that the lateral liquid /l/ has two varieties: light /l/ (a non-velarized allophone) and dark /l/ (a velarized allophone). However, this categorical view has been challenged in recent studies on both articulatory and acoustic grounds. The purpose of this study is to investigate whether /l/ velarization in American English should be treated as a continuum and to provide supporting data. The Buckeye Speech Corpus, a database of spontaneous American English speech, was used as the material. The formant frequencies of /l/ in each syllable position were measured and analyzed statistically, and the values, especially F2, differed significantly across positions. The results showed that there are further significantly different varieties of /l/ in American English, which supports the continuum view of /l/ velarization. Regarding the effect of the adjacent vowel, its backness was shown to affect the degree of /l/ velarization regardless of the syllable position of the lateral. This result helps provide solid ground for the continuum view.

Lengthening and shortening processes in Korean

  • Kang, Hyunsook;Kim, Tae-kyung
    • Phonetics and Speech Sciences / v.12 no.3 / pp.15-23 / 2020
  • This study examines the duration of Korean lax and tense stops in prosodic word-medial position, their interactions with nearby segments, and the phonological implications of these interactions. It first examines the lengthening of consonants as a function of the short lax stop. Experiment 1 shows that a sonorant C1 is significantly longer before a short lax stop C2 than before a long tense stop. Experiment 2 shows that a short lax stop C1 cancels the contrast between lax and tense obstruents at C2, making them appear as long tense obstruents (the Post-Stop Tensing Rule). We suggest that such lengthening phenomena occur in Korean to robustly preserve the contrastive length difference between C and CC. Second, this study examines the vowel shortening known as Closed-Syllable Vowel Shortening, which occurs before a long tense stop or a consonant sequence. Experiment 3 suggests that it should be interpreted as a temporal adjustment that keeps the interval from the onset of one vowel to the onset of the following vowel at a near-equal length. In conclusion, we suggest that Korean speech is planned and controlled with two specific intervals: the duration of the contrastive consonant interval between vowels, and the duration from the onset of one vowel to the onset of the following vowel.

An acoustic and perceptual investigation of the vowel length contrast in Korean

  • Lee, Goun;Shin, Dong-Jin
    • Phonetics and Speech Sciences / v.8 no.1 / pp.37-44 / 2016
  • The goal of the current study is to investigate how sound change is reflected in production and in perception, and what effect lexical frequency has on the loss of sound contrasts. Specifically, the study examined whether vowel length contrasts are retained in Korean speakers' productions, and whether Korean listeners can distinguish vowel-length minimal pairs in perception. Two production experiments and two perception experiments investigated this. For the production tests, twelve Korean native speakers in their 20s and 40s completed a read-aloud task as well as a map task. The results showed that, regardless of age group, all Korean speakers produced vowel length contrasts with small but significant differences in the read-aloud test. Interestingly, the difference between long and short vowels disappeared in the map task, indicating that speech mode affects the production of vowel length contrasts. For the perception tests, thirty-three Korean listeners completed a discrimination test and a forced-choice identification test. The results showed that Korean listeners retain a perceptual sensitivity that distinguishes the lexical meanings of vowel-length minimal pairs. We also found that identification accuracy was affected by word frequency, with higher accuracy for high- and mid-frequency words than for low-frequency words. Taken together, the current study demonstrates that speech mode (read-aloud vs. spontaneous) affects the production of a sound undergoing language change, and that word frequency affects sound change in speech perception.

A Phoneme-based Approximate String Searching System for Restricted Korean Character Input Environments (제한된 한글 입력환경을 위한 음소기반 근사 문자열 검색 시스템)

  • Yoon, Tai-Jin;Cho, Hwan-Gue;Chung, Woo-Keun
    • Journal of KIISE: Software and Applications / v.37 no.10 / pp.788-801 / 2010
  • Advances in mobile devices have been remarkable, so research on mobile input methods has become an increasingly important issue. There are many input devices, such as keypads, QWERTY keypads, touchscreens, and speech recognizers, but they are not as convenient as a desktop keyboard, so input strings usually contain many typing errors. Such errors rarely hinder communication between people, but they are critical for database searching, such as in a dictionary or address book, where they prevent correct results from being obtained. Hangul is especially affected: because one Hangul character is a combination of consonants and vowels, there are more than 10,000 distinct characters, and the error frequency is higher than in English. The suffix tree is the most widely used data structure for handling query errors, but it does not cover the full variety of errors. In this paper, we propose a fast approximate Korean word-searching system that tolerates a variety of typing errors. The system includes several algorithms that adapt general approximate string searching to Hangul. We also present a profanity filter built on the proposed system, which filters more than 90% of coined profanities.
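The phoneme-level idea behind such a system can be sketched with the standard Unicode composition formula for precomposed Hangul syllables (code point = 0xAC00 + (initial·21 + medial)·28 + final). The edit-distance part below is a plain Levenshtein DP over the jamo sequence, a simplification for illustration rather than the paper's actual indexing algorithm:

```python
# Decompose precomposed Hangul syllables (U+AC00..U+D7A3) into jamo
# indices, then run edit distance over the jamo sequence, so that a
# one-consonant typo costs 1 instead of a whole-syllable mismatch.
BASE, N_MEDIAL, N_FINAL = 0xAC00, 21, 28

def to_jamo(text):
    out = []
    for ch in text:
        code = ord(ch) - BASE
        if 0 <= code < 19 * N_MEDIAL * N_FINAL:
            initial, rest = divmod(code, N_MEDIAL * N_FINAL)
            medial, final = divmod(rest, N_FINAL)
            out.append(("I", initial))
            out.append(("M", medial))
            if final:                  # the final consonant is optional
                out.append(("F", final))
        else:
            out.append(("CH", ch))     # keep non-syllable characters as-is
    return out

def levenshtein(a, b):
    # classic dynamic-programming edit distance, one row at a time
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def jamo_distance(s, t):
    return levenshtein(to_jamo(s), to_jamo(t))
```

For example, '한' and '하' differ only by a final consonant, so their jamo distance is 1, whereas a syllable-level comparison would treat them as entirely different characters.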

Place Assimilation in OT

  • Lee, Sechang
    • Proceedings of the KSPS conference / 1996.10a / pp.109-116 / 1996
  • In this paper, I explore the possibility that the nature of place assimilation can be captured in terms of the OCP within Optimality Theory (McCarthy & Prince 1993, 1995; Prince & Smolensky 1993). In derivational models, each assimilatory process would be expressed through a different autosegmental rule. What any such model misses is the clear generalization that all of those processes avoid a configuration in which two consonantal place nodes are adjacent across a syllable boundary, as illustrated in (1): (equation omitted). In a derivational model, it is a coincidence that across languages there are changes that modify a structure of the form (1a) into another structure without adjacent consonantal place nodes (1b). OT allows us to express this effect through the constraint in (2), which forbids adjacent place nodes: (2) OCP(PL): adjacent place nodes are prohibited. A question then arises as to how consonantal and vocalic place nodes are formally distinguished in the output for the purpose of applying OCP(PL). Moreover, OCP(PL) would affect complex onsets and codas as much as coda-onset clusters in languages that have them, such as English. To remedy this problem, following McCarthy (1994), I assume that the canonical markedness constraint is a prohibition defined over no more than two segments, $\alpha$ and $\beta$: that is, $^{*}\{\alpha,\;\beta\}$, with appropriate conditions imposed on $\alpha$ and $\beta$. I propose OCP(PL) again in the following format: (3) OCP(PL) (table omitted), where $\alpha$ and $\beta$ are the target and the trigger of place assimilation, respectively. The '*' is a reminder that, in this format, constraints specify negative targets or prohibited configurations; any structure matching the specification violates the constraint.
In correspondence terms, the meaning of OCP(PL) is this: the constraint is violated if a consonantal place $\alpha$ is immediately followed by a consonantal place $\beta$ on the surface. One advantage of this format is that OCP(PL) is also invoked for place assimilation within a complex coda (e.g., sink [si(equation omitted)k]): we can make the constraint scan consonantal clusters only, excluding any intervening vowels. Finally, onset clusters typically do not undergo place assimilation; I propose that onsets be protected by a constraint which ensures that the coda, not the onset, loses its place feature.
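To make the evaluation logic concrete, here is a hypothetical miniature of the analysis: a toy OCP(PL) that penalizes adjacent consonants with distinct place nodes (a homorganic cluster is treated as sharing a single node, as in the assimilated output), ranked above a toy IDENT(PL). The segment inventory, the forms, and this rendering of "adjacent place nodes" are all invented for illustration and are not the paper's formalism:

```python
# Toy Optimality Theory evaluation: OCP(PL) >> IDENT(PL) selects the
# place-assimilated candidate for a hypothetical input /kanpa/.
PLACE = {"p": "lab", "b": "lab", "m": "lab",
         "t": "cor", "d": "cor", "n": "cor",
         "k": "dor", "g": "dor"}

def ocp_pl(form):
    # one violation per pair of adjacent consonants with distinct place
    # nodes; homorganic clusters count as one shared node (no violation)
    return sum(1 for x, y in zip(form, form[1:])
               if x in PLACE and y in PLACE and PLACE[x] != PLACE[y])

def ident_pl(inp, out):
    # one violation per segment whose place differs from the input
    return sum(1 for x, y in zip(inp, out)
               if x in PLACE and y in PLACE and PLACE[x] != PLACE[y])

def evaluate(inp, candidates, ranking):
    # compare violation profiles lexicographically, highest-ranked first
    return min(candidates,
               key=lambda c: [con(inp, c) if con is ident_pl else con(c)
                              for con in ranking])

winner = evaluate("kanpa", ["kanpa", "kampa"], [ocp_pl, ident_pl])
print(winner)  # the assimilated candidate wins under OCP(PL) >> IDENT(PL)
```

Faithful [kanpa] incurs one OCP(PL) violation at the heterorganic /np/ cluster, while assimilated [kampa] trades it for a lower-ranked IDENT(PL) violation, so [kampa] is selected.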

Speech Animation Synthesis based on a Korean Co-articulation Model (한국어 동시조음 모델에 기반한 스피치 애니메이션 생성)

  • Jang, Minjung;Jung, Sunjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.49-59 / 2020
  • In this paper, we propose a speech animation synthesis method specialized for Korean through a rule-based co-articulation model. Speech animation is widely used in cultural industries such as movies, animations, and games that require natural and realistic motion. However, because audio-driven speech animation techniques have mainly been developed for English, the results for domestic content are often visually unnatural. For example, a voice actor's dubbing may be played with no mouth motion at all, or at best with an unsynchronized loop of simple mouth shapes. Although language-independent speech animation models exist, they are not specialized for Korean and do not yet ensure the quality required for domestic content production. We therefore propose a natural speech animation synthesis method that reflects the linguistic characteristics of Korean, driven by input audio and text. Reflecting the fact that vowels largely determine the mouth shape in Korean, we define a co-articulation model that separates the lips and the tongue, which solves the previous problems of lip distortion and occasional loss of phoneme characteristics. Our model also reflects differences in prosodic features for improved dynamics in speech animation. Through user studies, we verify that the proposed model can synthesize natural speech animation.
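As a hypothetical illustration of the vowel-dominant rule described above (the tables and names below are invented for illustration and are not the paper's model), the lip shape of each syllable can be keyed to its vowel while the onset consonant contributes only a tongue target:

```python
# Toy keyframe assignment: lips follow the vowel, the tongue follows the
# consonant, mirroring the lips/tongue separation the abstract describes.
LIP = {"a": "open", "eo": "mid-open", "o": "round", "u": "round",
       "i": "spread", "eu": "neutral"}
TONGUE = {"n": "alveolar", "t": "alveolar", "s": "alveolar",
          "k": "velar", "g": "velar"}  # labials/empty onsets: no target

def keyframes(syllables):
    """syllables: list of (onset, vowel) pairs in a toy romanization ->
    per-syllable (lip_shape, tongue_target) keyframes."""
    return [(LIP[vowel], TONGUE.get(onset)) for onset, vowel in syllables]

# e.g. the two syllables of a word like "아너" (empty onset + 'a', 'n' + 'eo')
print(keyframes([("", "a"), ("n", "eo")]))
```

Because the lip channel never depends on the consonant, a labial-free onset cannot distort the vowel's mouth shape, which is the failure mode the abstract attributes to earlier models.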

Visualization of Korean Speech Based on the Distance of Acoustic Features (음성특징의 거리에 기반한 한국어 발음의 시각화)

  • Pok, Gou-Chol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.3 / pp.197-205 / 2020
  • Korean has the characteristic that the pronunciation of phoneme units such as vowels and consonants is fixed and the pronunciation associated with a written form does not change, so foreign learners can approach the language rather easily. However, when one pronounces words, phrases, or sentences, the pronunciation changes widely and in complex ways at syllable boundaries, and the association between notation and pronunciation no longer holds. Consequently, it is very difficult for foreign learners to master standard Korean pronunciation. Despite these difficulties, systematic analysis of pronunciation errors in Korean words is possible because, unlike in other languages including English, the relationship between Korean notation and pronunciation can be described as a set of firm rules without exceptions. In this paper, we propose a visualization framework that shows the differences between standard and erroneous pronunciations as quantitative measures on the computer screen. Previous research only shows color representations and 3D graphics of speech properties, or an animated view of the changing shapes of the lips and mouth cavity; moreover, the features used in such analyses are point data such as the average over a speech range. In this study, we propose a method that directly uses the time-series data instead of a summary or distorted data. This was realized with a deep-learning-based technique combining a self-organizing map, a variational autoencoder, and a Markov model, and we achieved a superior performance enhancement compared to the point-based method.
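The paper's pipeline (self-organizing map + variational autoencoder + Markov model) is beyond a short sketch, but the underlying idea of comparing pronunciations on their raw time series rather than on point summaries can be illustrated with plain dynamic time warping, used here purely as a hedged stand-in:

```python
# DTW distance between two sequences of acoustic feature vectors: it
# aligns the sequences elastically in time, so two utterances with the
# same contour but different tempo come out close, which a single
# point summary (e.g. a mean) would obscure.
import math

def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])   # local feature distance
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

standard = [(0.0,), (1.0,), (2.0,), (1.0,)]
learner  = [(0.0,), (0.0,), (1.0,), (2.0,), (1.0,)]  # same contour, slower start
print(dtw(standard, learner))
```

Here the learner's rendition merely lingers on the first value, so DTW aligns the repeated frame to the same target and reports zero distance, whereas a frame-by-frame comparison would penalize the tempo difference.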