• Title/Summary/Keyword: Speech sound


Korean first graders' word decoding skills, phonological awareness, rapid automatized naming, and letter knowledge with/without developmental dyslexia (초등 1학년 발달성 난독 아동의 낱말 해독, 음운인식, 빠른 이름대기, 자소 지식)

  • Yang, Yuna;Pae, Soyeong
    • Phonetics and Speech Sciences
    • /
    • v.10 no.2
    • /
    • pp.51-60
    • /
    • 2018
  • This study aims to compare the word decoding skills, phonological awareness (PA), rapid automatized naming (RAN) skills, and letter knowledge of first graders with developmental dyslexia (DD) and those who were typically developing (TD). Eighteen children with DD and eighteen TD children, matched on nonverbal intelligence and discourse ability, participated in the study. The word decoding subtest of the Korean language-based reading assessment (Pae et al., 2015) was administered. Phoneme-grapheme correspondent words were analyzed according to whether the word has meaning, whether the syllable has a final consonant, and the position of the grapheme within the syllable. Letter knowledge was assessed by asking the names and sounds of 12 consonants and 6 vowels. The children's PA of word, syllable, body-coda, and phoneme blending was tested. Object and letter RAN was measured in seconds. Difficulty decoding non-words was more noticeable in the DD group than in the TD group. The TD children read graphemes in syllable-initial and syllable-final position with 99% accuracy, whereas children with DD read them with 80% and 82% accuracy, respectively. In addition, the DD group had more difficulty decoding words with two final consonants (patchims): the DD group read only 57% of such words correctly, while the TD group read 91% correctly. There were significant differences in body-coda PA, phoneme-level PA, letter RAN, object RAN, and letter-sound knowledge between the two groups. This study confirms the existence of Korean developmental dyslexics and the urgent need to include a Korean-specific phonics approach in the education system.
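
The group contrast reported in this abstract (57% vs. 91% accuracy on words with two patchims) is the kind of comparison that a short analysis script makes concrete. A minimal sketch follows, assuming per-child accuracy scores; the individual values and the use of Welch's t-test are illustrative assumptions, since the abstract reports only group-level percentages.

```python
# Hypothetical per-child decoding accuracy (%) on words with two final consonants
# (patchims); the individual scores below are invented, chosen only so that the
# group means roughly echo the 57% (DD) and 91% (TD) figures in the abstract.
import numpy as np
from scipy import stats

dd = np.array([55, 60, 48, 62, 57, 50, 65, 58, 54, 59, 61, 53, 56, 60, 52, 57, 63, 55])  # DD, n=18
td = np.array([92, 90, 95, 88, 93, 91, 89, 94, 90, 92, 87, 93, 91, 90, 92, 89, 94, 91])  # TD, n=18

t, p = stats.ttest_ind(dd, td, equal_var=False)  # Welch's t-test, unequal variances
print(f"DD mean {dd.mean():.1f}%, TD mean {td.mean():.1f}%, t = {t:.2f}, p = {p:.4f}")
```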

Intonation Training System (Visual Analysis Tool) and the application of French Intonation for Korean Learners (컴퓨터를 이용한 억양 교육 프로그램 개발 : 프랑스어 억양 교육을 중심으로)

  • Yu, Chang-Kyu;Son, Mi-Ra;Kim, Hyun-Gi
    • Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.49-62
    • /
    • 1999
  • This study concerns the Visual Analysis Tool (VAT), an educational program for training foreign-language intonation on a personal computer. The VAT runs on an IBM-PC 386 compatible or higher and displays the spectrogram, waveform, intensity, and pitch contour. The system supports waveform zoom in/out and the documentation of measured values. In this paper, intensity and pitch-contour information were used. Twelve French sentences were recorded from a French conversation tape, and three Koreans participated in the study. They spoke the twelve sentences repeatedly and tried to reproduce the same pitch contour by visually matching their pitch contour to the native speaker's. The sentences were recorded again once the participants had become familiar with the intonation, intensity, and pauses. The differences in pitch contour (rising or falling), pitch value, energy, total sentence duration, and rhythmic-group boundaries between the native speaker and the participants were compared before and after training. The results were as follows: 1) In declarative sentences, the native speaker's pitch contour falls at the end of the sentence, but the participants' pitch contours were flat before training. 2) In interrogatives, the native speaker's pitch contour rises at the end of the sentence, with the exception of wh-questions (qu'est-ce que), and the pitch value varied a great deal. In interrogative 'S + V' sentences, the pitch contour rose higher than in the other sentence types and varied considerably. 3) In exclamatory sentences, the pitch contour looked like the shape of a mountain, but the participants could not make it fall either before or after training.
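
The core operation the abstract describes, extracting a learner's pitch contour and comparing it visually against a native speaker's, can be approximated with current tools. Below is a minimal sketch under stated assumptions: two WAV files with hypothetical names, librosa's pyin tracker standing in for the original VAT pitch analysis, and a simple shape correlation instead of the paper's measurements.

```python
# Sketch: extract two pitch contours and compare their shapes, roughly as the
# VAT display lets a learner do by eye. File names are illustrative assumptions.
import numpy as np
import librosa

def pitch_contour(path, fmin=75.0, fmax=400.0):
    y, sr = librosa.load(path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    return f0[voiced]                              # keep voiced frames only

native = pitch_contour("native_sentence.wav")      # hypothetical recordings
learner = pitch_contour("learner_sentence.wav")

# Resample both contours to a common length and correlate their shapes.
n = 100
grid = np.linspace(0, 1, n)
native_r = np.interp(grid, np.linspace(0, 1, len(native)), native)
learner_r = np.interp(grid, np.linspace(0, 1, len(learner)), learner)

corr = np.corrcoef(native_r, learner_r)[0, 1]
final_slope = native_r[-1] - native_r[-10]         # falling (<0) or rising (>0) ending
print(f"Contour shape correlation: {corr:.2f}, native final slope: {final_slope:+.1f} Hz")
```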


Phonetic Functionalism in Coronal/Non-coronal Asymmetry

  • Kim, Sung-A.
    • Speech Sciences
    • /
    • v.10 no.1
    • /
    • pp.41-58
    • /
    • 2003
  • Coronal/non-coronal asymmetry refers to the typological trend whereby coronals, rather than non-coronals, are the more likely targets in place assimilation. Although the phenomenon has been accounted for by resorting to the notion of unmarkedness in formalistic approaches to sound patterns, the examination of rules and representations cannot answer why such a process should exist in the first place. Furthermore, the motivation for coronal/non-coronal asymmetry has remained controversial to date, even in the field of phonetics. The present study investigated listeners' perception of coronal and non-coronal stops in the context of $VC_{1}C_{2}V$ after critically reviewing the three types of phonetic accounts of coronal/non-coronal asymmetry, i.e., the articulatory, perceptual, and gestural-overlap accounts. An experiment was conducted to test whether the phenomenon in question may arise from listeners' inability to identify the weaker place cues in VC transitions, as argued by Ohala (1990), i.e., that coronals have weak place cues that cause listeners' misperception. Spliced nonsense $VC_{1}C_{2}V$ utterances were presented to 20 native speakers of English and Korean. Data analysis showed that the majority of the subjects reported $C_{2}$ as $C_{1}$. More importantly, the place of articulation of $C_{1}$ did not affect the listeners' identification: compared to non-coronals, coronals did not show a significantly lower rate of correct identification. This study challenges the view that coronal/non-coronal asymmetry is attributable to the weak place cues of coronals, providing evidence that CV cues are perceptually more salient than VC cues. While the perceptual-saliency account may explain the frequent occurrence of regressive assimilation across languages, it cannot be extended to coronal/non-coronal asymmetry.
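
The key test in this abstract is whether the place of $C_{1}$ (coronal vs. non-coronal) affects correct identification. A minimal sketch of that tally is shown below with invented counts, since the abstract reports no raw numbers; a chi-square test of independence stands in for whatever statistic the paper actually used.

```python
# Hypothetical identification counts for C1 in spliced VC1C2V stimuli.
# Rows: C1 place; columns: [correctly identified, misidentified (typically heard as C2)].
# The counts are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [34, 66],   # coronal C1
    [38, 62],   # non-coronal C1
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}  (a large p means no reliable effect of C1 place)")
```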


Adaptation Mode Controller for Adaptive Microphone Array System (마이크로폰 어레이를 위한 적응 모드 컨트롤러)

  • Jung Yang-Won;Kang Hong-Goo;Lee Chungyong;Hwang Youngsoo;Youn Dae Hee
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.11C
    • /
    • pp.1573-1580
    • /
    • 2004
  • In this paper, an adaptation mode controller for an adaptive microphone array system is proposed for high-quality speech acquisition in real environments. To ensure proper adaptation of the adaptive array algorithm, the proposed adaptation mode controller uses not only temporal information but also spatial information. The controller is constructed with two processing stages: an initialization stage and a running stage. A sound source localization technique is adopted in the initialization stage, and a signal correlation characteristic is used in the running stage. For the adaptive array algorithm, a generalized sidelobe canceller with an adaptive blocking matrix is used. The proposed adaptation mode controller can be used even when the adaptive blocking matrix is not adapted, and it is much more stable than the power-ratio method. The proposed algorithm is evaluated in a real environment, and the results show a 13 dB SINR improvement with the speaker sitting 2 m from the array.
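
For readers unfamiliar with the structure the abstract builds on, here is a minimal numpy sketch of a generalized sidelobe canceller. It uses generic textbook choices (a simple average as the fixed beamformer, adjacent-channel differences as the blocking matrix, an NLMS update), not the paper's adaptive blocking matrix or its adaptation mode controller; the comment marks where such a controller would gate adaptation.

```python
# Minimal generalized sidelobe canceller (GSC) sketch for an M-microphone array.
# Assumes the channels are already time-aligned (steered) toward the desired speaker.
import numpy as np

def gsc(x, mu=0.1, taps=16, eps=1e-6):
    """x: (M, N) time-aligned microphone signals. Returns the enhanced signal (N,)."""
    M, N = x.shape
    d = x.mean(axis=0)                      # fixed beamformer: simple average
    u = x[:-1] - x[1:]                      # blocking matrix: adjacent differences, (M-1, N)
    w = np.zeros((M - 1, taps))             # adaptive noise-cancelling filters
    y = np.zeros(N)
    for n in range(taps, N):
        blk = u[:, n - taps:n]              # recent noise-reference samples, (M-1, taps)
        y[n] = d[n] - np.sum(w * blk)       # subtract estimated interference leakage
        norm = np.sum(blk * blk) + eps
        w += mu * y[n] * blk / norm         # NLMS update; an adaptation mode controller
                                            # would enable this only on noise-dominant frames
    return y

# Example: 4 microphones, 1 s of 8 kHz audio (random placeholder data).
mics = np.random.randn(4, 8000)
out = gsc(mics)
```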

Some Problems in Teaching Swedish Pronunciation: Focused on Vowels (스웨덴어 발음 교육상의 몇 가지 문제점 - 모음을 중심으로 -)

  • Byeon Gwang-Su
    • MALSORI
    • /
    • no.4
    • /
    • pp.20-30
    • /
    • 1982
  • The aim of this paper is to analyse the difficulties with Swedish vowels encountered by Korean learners and to seek ways to correct the likely errors. In the course of the analysis, the Swedish and Korean vowels in question are compared in order to describe the differences and similarities between the two systems. This contrastive description is largely based on the students' articulatory speech level and the writer's auditory judgement. The following points are discussed: 1) Vowel length as a distinctive feature in Swedish, compared with Korean. 2) Special attention is paid to the Swedish vowel [ʉ:], which is characterized by its peculiar type of lip rounding. 3) The six pairs of Swedish vowels that are phonologically contrastive but difficult for Koreans to distinguish from one another: [y:] ~ [ʉ:], [i:] ~ [y:], [e:] ~ [ø:], [ʉ:] ~ [u:], [ʉ:] ~ [ɵ], [ɵ] ~ [u]. 4) The r-colored vowel of postvocalic /r/, which is very common in American English, is not allowed in Swedish sound sequences; it has to be broken up and replaced by bi-segmental vowel-consonant sequences. Koreans accustomed to the American pronunciation are warned in this respect. For a more distinct articulation of the postvocalic /r/, the trill [r] is preferred to a fricative realization. 5) The front vowels [e, ε, ø] become opener variants [æ, æ:] before /r/ or supradentals. The results of the analysis show that difficulties with the pronunciation of the target language (Swedish) are mostly due to interference from the learner's source language (Korean). However, the learner sometimes also encounters interference from another foreign language with which he or she is already familiar, when that language resembles the target language more closely than his or her mother tongue does. Hence this foreign language (American English) functions in this case as a second language for Koreans learning Swedish.


A preliminary study on standardization of the phoneme perception test for school-aged children: Focused on hearing-impaired children (학령기용 음소지각검사 표준화를 위한 기초연구: 청각장애아동을 대상으로)

  • Shin, Eun-Yeong;Cho, Soo-Jin;Lee, HyoIn
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.1
    • /
    • pp.99-107
    • /
    • 2022
  • This study analyzed the consonant perception abilities and errors of hearing-impaired children wearing hearing aids or cochlear implants, and verified the suitability of the test items, using the Phoneme Perception Test for School-Aged Children (PPT-S). The results showed that children with hearing impairment have more difficulty perceiving final consonants than initial consonants. The hard type of the PPT-S, in which the manner and place of articulation of the target and foil words are similar, was perceived as more difficult than the easy type. Among the initial consonants, the incorrect-response rate for aspirated sounds was higher. Among the final consonants, the incorrect-response rates for 'ㄷ' and 'ㅁ' were relatively high. There was no significant difference in the correct-response rate according to the gender of the speaker. These results can serve as basic data for standardizing the PPT-S and for evaluating intervention effects before and after hearing rehabilitation with hearing-impaired children.
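
The error analysis described here, incorrect-response rates broken down by target phoneme and syllable position, reduces to a simple tally. A minimal sketch follows with invented response records; the actual PPT-S item data are not given in the abstract.

```python
# Sketch: tally incorrect-response rates per target phoneme and position.
# The response records below are invented placeholders for illustration.
from collections import defaultdict

responses = [
    # (target phoneme, position, correct?)
    ("ㄷ", "final", False), ("ㄷ", "final", False), ("ㄷ", "final", True),
    ("ㅁ", "final", False), ("ㅁ", "final", True),
    ("ㅊ", "initial", False), ("ㅊ", "initial", True),
]

counts = defaultdict(lambda: [0, 0])            # (phoneme, position) -> [errors, total]
for phoneme, pos, ok in responses:
    counts[(phoneme, pos)][0] += 0 if ok else 1
    counts[(phoneme, pos)][1] += 1

for (phoneme, pos), (err, total) in sorted(counts.items()):
    print(f"{phoneme} ({pos}): {100 * err / total:.0f}% incorrect ({err}/{total})")
```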

A Very Low-Bit-Rate Analysis-by-Synthesis Speech Coder Using Zinc Function Excitation (Zinc 함수 여기신호를 이용한 분석-합성 구조의 초 저속 음성 부호화기)

  • Seo Sang-Won;Kim Jong-Hak;Lee Chang-Hwan;Jeong Gyu-Hyeok;Lee In-Sung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.6
    • /
    • pp.282-290
    • /
    • 2006
  • This paper proposes a new digital reverberator that models the analog helical coil spring reverberator used in guitar amplifiers. While conventional digital reverberators have been proposed to provide a better sound field mainly on the basis of room acoustics, no algorithm or analysis has been proposed for digital reverberators that model the helical coil spring reverberator. Considering that approximately 70~80 percent of guitar amplifiers still use a helical coil spring reverberator, this research was based not on room acoustics but on the helical coil spring reverberator itself as an effector. Simulations with the proposed algorithm confirmed that the resulting digital reverberator provides a perceptually equivalent response to conventional analog helical coil spring reverberators.
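
For context on the "conventional" room-acoustics reverberators the abstract contrasts itself with, here is a minimal Schroeder-style sketch: parallel feedback combs followed by series allpass filters. It is not the paper's spring-reverb model; the delay lengths and gains are generic illustrative values.

```python
# Minimal Schroeder-style digital reverberator (parallel combs + series allpasses).
# Parameters are generic textbook values, not the paper's helical-spring model.
import numpy as np

def comb(x, delay, g):
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    y = np.zeros_like(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def reverb(x, sr=44100):
    combs = [(int(sr * t), 0.8) for t in (0.0297, 0.0371, 0.0411, 0.0437)]
    wet = sum(comb(x, d, g) for d, g in combs) / len(combs)
    for d, g in ((int(sr * 0.005), 0.7), (int(sr * 0.0017), 0.7)):
        wet = allpass(wet, d, g)
    return 0.7 * x + 0.3 * wet           # dry/wet mix

dry = np.random.randn(44100) * np.exp(-np.linspace(0, 6, 44100))  # decaying noise burst
out = reverb(dry)
```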

Automatic Indexing Algorithm of Golf Video Using Audio Information (오디오 정보를 이용한 골프 동영상 자동 색인 알고리즘)

  • Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.5
    • /
    • pp.441-446
    • /
    • 2009
  • This paper proposes an automatic indexing algorithm for golf video using audio information. In the proposed algorithm, the input stream is demultiplexed into video and audio streams. By means of an Adaboost-cascade classifier, the continuous audio stream is classified into announcer speech recorded in the studio, music segments accompanying players' names on screen, audience reaction segments, reporter speech with field background, and field-noise segments such as wind or waves. Golf swing sounds, including drive shots, iron shots, and putting shots, are detected by impulse onset detection and modulation-spectrum verification. The detected swings and applause are then used to index action and highlight units effectively. Compared with video-based semantic analysis, the main advantage of the proposed system is its small computational requirement, which makes it practical to apply the technology to embedded consumer electronic devices for fast browsing.
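
The impulse-onset idea mentioned for swing-sound detection can be illustrated with a short-time-energy peak test. The sketch below is a simplification under stated assumptions: frame sizes and the 12 dB peak-to-background threshold are invented, and the paper's modulation-spectrum verification stage is not reproduced.

```python
# Sketch: detect impulsive onsets (e.g., golf shot sounds) via short-time energy
# peaks over a local background. Thresholds and frame sizes are illustrative.
import numpy as np

def impulse_onsets(x, sr, frame_ms=10, ratio_db=12.0):
    frame = int(sr * frame_ms / 1000)
    n_frames = len(x) // frame
    energy = np.array([np.sum(x[i * frame:(i + 1) * frame] ** 2) for i in range(n_frames)])
    e_db = 10 * np.log10(energy + 1e-12)
    background = np.convolve(e_db, np.ones(21) / 21, mode="same")   # local average level
    hits = np.where(e_db - background > ratio_db)[0]
    return hits * frame / sr                                        # onset times in seconds

sr = 16000
x = np.random.randn(sr * 5) * 0.01                 # 5 s of low-level noise
x[sr * 2:sr * 2 + 80] += np.hanning(80) * 2.0      # synthetic "shot" impulse at t = 2 s
print(impulse_onsets(x, sr))
```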

An aerodynamic study of snoring patients using the Aerophone II (코콜이 환자의 sleep splint 착용 전후의 음향학적 및 공기역학적 연구)

  • Jung, Se-Jin;Kim, Hyun-Gi;Shin, Hyo-Keun
    • The Journal of the Korean dental association
    • /
    • v.49 no.4
    • /
    • pp.219-226
    • /
    • 2011
  • Snoring and obstructive sleep apnea (OSA) are common sleep-disordered breathing conditions. Habitual snoring is caused by vibration of the soft tissue of the upper airway while breathing during sleep, and obstructive sleep apnea is caused by repeated obstruction of airflow during sleep, especially at the pharynx. Researchers have shown that snoring is the most important symptom connected with the obstructive sleep apnea syndrome. Treatment is directed toward improving airflow by various surgical and nonsurgical methods. Current surgical procedures include uvulopalatopharyngoplasty (UPPP), orthognathic surgery, and nasal cavity surgery. Nonsurgical methods include nasal continuous positive airway pressure (CPAP), pharmacologic therapy, weight loss in obese patients, and oral appliances (sleep splints). A sleep splint brings the mandible forward in order to increase upper airway volume and prevents total upper airway collapse during sleep. However, the precise mechanism of action, especially its aerodynamic aspect, is not yet completely understood. The aim of this study was to evaluate the effect of conservative treatment of snoring and OSAS with a sleep splint by measuring aerodynamic changes with an Aerophone II. We measured airflow, sound pressure level, duration, and mean power from the overall airflow through the Aerophone II mask. The results indicated a positive correlation between a decrease in maximum airflow rate and a decrease in maximum sound pressure level, and a negative correlation between a decrease in maximum airflow rate and an increase in duration.
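
The reported relations are simple bivariate correlations between pre/post changes. A minimal sketch follows with invented per-patient differences in maximum airflow rate and maximum sound pressure level; the abstract gives no raw data, so these values are placeholders only.

```python
# Hypothetical per-patient decreases after wearing the sleep splint (invented data).
import numpy as np
from scipy.stats import pearsonr

d_airflow = np.array([0.12, 0.30, 0.05, 0.22, 0.18, 0.27, 0.09, 0.15])  # L/s decrease
d_spl     = np.array([3.1, 7.2, 1.0, 5.5, 4.2, 6.8, 2.1, 3.9])          # dB decrease

r, p = pearsonr(d_airflow, d_spl)
print(f"r = {r:.2f}, p = {p:.3f}  (a positive r matches the reported airflow-SPL relation)")
```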

A Study on Spoken Korean-Digit Recognition Using a Neural Network (神經網을 利用한 韓國語 數字音 認識에 관한 硏究)

  • Park, Hyun-Hwa;Gahang, Hae Dong;Bae, Keun Sung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.11 no.3
    • /
    • pp.5-13
    • /
    • 1992
  • Taking advantage of the fact that each Korean digit is a monosyllabic word, we propose a spoken Korean-digit recognition scheme using a multi-layer perceptron. Each spoken Korean digit is divided into three segments (initial sound, medial vowel, and final consonant) based on the voice starting/ending points and a peak point in the middle of the vowel sound. Feature vectors such as cepstrum, reflection coefficients, Δcepstrum, and Δenergy are extracted from each segment. It is shown that the cepstrum, as an input vector to the neural network, gives a higher recognition rate than the reflection coefficients. Regression coefficients of the cepstrum did not affect the recognition rate as much as expected, presumably because the features were extracted from selected stationary segments of the input speech signal. With 150 cepstral coefficients obtained from each spoken digit, we achieved a correct recognition rate of 97.8%.
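
The pipeline in this abstract, cepstral features per segment fed to a multi-layer perceptron over ten digit classes, can be sketched in a few lines. The sketch below uses synthetic feature vectors and scikit-learn's MLPClassifier as stand-ins for the original 150-coefficient cepstral input and the paper's network; it shows the shape of the approach, not the reported 97.8% result.

```python
# Sketch: multi-layer perceptron over 150-dimensional "cepstral" features for 10 digits.
# The feature vectors are synthetic placeholders, not real cepstra from speech.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_digit, dim = 40, 150
X = np.vstack([rng.normal(loc=d, scale=2.0, size=(n_per_digit, dim)) for d in range(10)])
y = np.repeat(np.arange(10), n_per_digit)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy on synthetic data: {clf.score(X_te, y_te):.3f}")
```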
