• Title/Summary/Keyword: Speech level

678 search results

Early Vocalization and Phonological Developments of Typically Developing Children: A longitudinal study (일반 영유아의 초기 발성과 음운 발달에 관한 종단 연구)

  • Ha, Seunghee;Park, Bora
    • Phonetics and Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.63-73
    • /
    • 2015
  • This study longitudinally investigated the early vocalization and phonological development of typically developing children. Ten typically developing children participated in the study from 9 to 18 months of age. Spontaneous utterance samples were collected at 9, 12, 15, and 18 months of age and were phonetically transcribed and analyzed. The utterance samples were classified into 5 levels using the Stark Assessment of Early Vocal Development-Revised (SAEVD-R). The data analysis focused on level 4 and 5 vocalizations, as classified by the SAEVD-R, and on word productions. The percentage of each vocalization level, vocalization length, syllable structures, and consonant inventory were obtained. The results showed that the percentages of level 4 and 5 vocalizations and words increased significantly with age, and that the production of syllable structures containing consonants increased significantly around 12 and 15 months of age. On average, the children produced 4 types of syllable structure and 5.4 consonants at 9 months, and 5 types of syllable structure and 9.8 consonants at 18 months. The phonological development patterns observed in this study were consistent with those derived from children's meaningful utterances in previous studies. The results support the perspective of continuity between babbling and early speech. This study has clinical implications for the early identification of, and speech-language intervention for, young children with speech delays or at risk of them.
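
The measures reported in this abstract (the percentage of each SAEVD-R vocalization level and the consonant inventory) reduce to simple tallies over transcribed utterances. A minimal sketch of that tally, assuming hypothetical per-utterance records of SAEVD-R level and transcribed consonants:

```python
from collections import Counter

# Hypothetical transcription records for one child at one age point:
# each entry is (SAEVD-R level 1-5, consonants transcribed in the utterance).
utterances = [
    (4, ["m", "b"]),
    (5, ["p", "t", "m"]),
    (3, []),
    (4, ["b"]),
]

level_counts = Counter(level for level, _ in utterances)
total = len(utterances)

# Percentage of each vocalization level (levels 4 and 5 are the study's focus).
for level in range(1, 6):
    pct = 100.0 * level_counts.get(level, 0) / total
    print(f"level {level}: {pct:.1f}%")

# Consonant inventory: the set of distinct consonants produced at this age point.
inventory = {c for _, consonants in utterances for c in consonants}
print("consonant inventory size:", len(inventory))
```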

Cyber Character Implementation with Recognition and Synthesis of Speech/Image (음성/영상의 인식 및 합성 기능을 갖는 가상캐릭터 구현)

  • Choe, Gwang-Pyo;Lee, Du-Seong;Hong, Gwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.37 no.5
    • /
    • pp.54-63
    • /
    • 2000
  • In this paper, we implemented a cyber character capable of speech recognition, speech synthesis, motion tracking, and 3D animation. For speech recognition, we used a discrete HMM algorithm with 128-level K-means vector quantization and MFCC feature vectors. For speech synthesis, we used a demi-syllable TD-PSOLA algorithm. For PC-based motion tracking, we present a fast optical-flow-like method. For animating the 3D model, we used vertex interpolation with the Direct3D retained mode. Finally, we implemented a cyber character that integrates the above systems: it plays a multiplication-table game with the user and always looks at the user by means of the motion tracking system.
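
The front end described in this abstract (MFCC feature vectors quantized with a 128-level K-means codebook to produce symbols for a discrete HMM) can be illustrated roughly as follows. This is a minimal sketch assuming librosa for MFCC extraction and SciPy for the codebook, not the authors' implementation:

```python
import numpy as np
import librosa
from scipy.cluster.vq import kmeans2, vq

def build_codebook(wav_paths, n_codewords=128, n_mfcc=13):
    """Train a K-means codebook on MFCC frames pooled from training audio."""
    frames = []
    for path in wav_paths:
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, T)
        frames.append(mfcc.T)                                    # shape (T, n_mfcc)
    data = np.vstack(frames).astype(np.float64)
    codebook, _ = kmeans2(data, n_codewords, minit="++")
    return codebook

def quantize(wav_path, codebook, n_mfcc=13):
    """Map each MFCC frame to its nearest codeword index (discrete-HMM symbols)."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T
    symbols, _ = vq(mfcc.astype(np.float64), codebook)
    return symbols  # 1-D array of integers in [0, 127]
```

The resulting symbol sequences would then feed discrete-HMM training and decoding, e.g. one HMM per vocabulary word.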

An Improvement of Speech Hearing Ability for Sensorineural Impaired Listeners (감음성(感音性) 난청인의 언어청력 향상에 관한 연구)

  • Lee, S.M.;Woo, H.C.;Kim, D.W.;Song, C.G.;Lee, Y.M.;Kim, W.K.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1996 no.05
    • /
    • pp.240-242
    • /
    • 1996
  • In this paper, we propose a hearing-aid processing method suitable for the sensorineural hearing impaired. Because sensorineural-impaired listeners generally have a narrow audible range between the hearing threshold and the discomfort level, the speech spectrum can easily exceed their audible range. Therefore, the speech spectrum must be optimally amplified and compressed into the listener's audible range. Since the level and frequency content of the input speech signal vary continuously, the input signal must be compensated for the listener's frequency-dependent gain loss, especially in the frequency bands that carry the most information. The input signal is divided into short time blocks, and the spectrum within each block is calculated. The frequency-gain characteristic is determined from the calculated spectrum, and the number of frequency bands and the target gain to be applied to the input signal are estimated. The input signal within each block is then processed by a single digital filter with the calculated frequency-gain characteristic. Monosyllabic speech tests conducted to evaluate the performance of the proposed algorithm showed improved scores.
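
The processing chain described here (short time blocks, a per-block spectrum, a frequency-gain characteristic that amplifies and compresses the speech into the listener's audible range, then a single filter per block) can be approximated as in the sketch below. The band edges, threshold/discomfort levels, and the 2:1 compression rule are illustrative assumptions, not the authors' fitted parameters:

```python
import numpy as np

def compress_into_audible_range(x, sr, band_edges_hz, thresh_db, ucl_db,
                                block=512, hop=256):
    """Block-wise multi-band gain: map each band's level into [threshold, UCL].
    thresh_db and ucl_db hold one value per band (len(band_edges_hz) - 1).
    Simplified windowed overlap-add, without exact COLA normalisation."""
    win = np.hanning(block)
    out = np.zeros(len(x) + block)
    edges = np.asarray(band_edges_hz, dtype=float)
    freqs = np.fft.rfftfreq(block, 1.0 / sr)
    for start in range(0, len(x) - block, hop):
        spec = np.fft.rfft(x[start:start + block] * win)
        gain = np.ones(len(spec))
        for b in range(len(edges) - 1):
            idx = (freqs >= edges[b]) & (freqs < edges[b + 1])
            if not np.any(idx):
                continue
            level_db = 20 * np.log10(np.sqrt(np.mean(np.abs(spec[idx]) ** 2)) + 1e-12)
            # 2:1 compression of the band level toward the audible range,
            # capped at the uncomfortable loudness level (UCL).
            target_db = min(thresh_db[b] + 0.5 * (level_db - thresh_db[b]), ucl_db[b])
            gain[idx] = 10 ** ((target_db - level_db) / 20.0)
        out[start:start + block] += np.fft.irfft(spec * gain, n=block) * win
    return out[:len(x)]
```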

Comparison of the Awareness Level of the Prolonged Sound Rules in Elementary School Students (초등학생의 연음규칙 인식수준 비교)

  • Lee, Eun-Seon;Seok, Dong-Il
    • Speech Sciences
    • /
    • v.12 no.2
    • /
    • pp.109-120
    • /
    • 2005
  • The purpose of this study was to examine the awareness level of the prolonged sound rules in elementary school students. The participants were 148 elementary school students in grades 1 through 6. Their awareness of the prolonged sound rules was evaluated with reading tasks composed of 40 words and sentences. A one-way ANOVA followed by Scheffé post hoc tests showed a statistically significant difference between the groups in the awareness level of the prolonged sound rules. The younger the children, the larger the differences in awareness of the prolonged sound rules. This awareness might affect their cognitive development, and systematic instruction may be helpful, especially for lower graders. In conclusion, the findings showed significant differences in the awareness level of the prolonged sound rules at both the word and sentence levels. It was also found that the awareness level tends to increase with age, with acquisition appearing to be complete by the 3rd grade.
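
The group comparison reported here (a one-way ANOVA over grade groups, followed by post hoc tests) is a standard procedure; a minimal sketch with hypothetical per-grade scores, using SciPy:

```python
from scipy import stats

# Hypothetical awareness scores (correct items out of 40) for three grade groups.
grade1 = [18, 22, 20, 25, 19]
grade3 = [31, 29, 34, 33, 30]
grade6 = [36, 38, 35, 37, 39]

f_stat, p_value = stats.f_oneway(grade1, grade3, grade6)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A post hoc test (e.g., Scheffe) would then identify which grade pairs differ.
```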

A 3-Level Endpoint Detection Algorithm for Isolated Speech Using Time and Frequency-based Features

  • Eng, Goh Kia;Ahmad, Abdul Manan
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2004.08a
    • /
    • pp.1291-1295
    • /
    • 2004
  • This paper proposes a new approach to endpoint detection for isolated speech that significantly improves endpoint detection performance. The proposed algorithm relies on the root mean square (RMS) energy, the zero-crossing rate, and spectral characteristics of the speech signal, with a Euclidean distance measure over cepstral coefficients adopted to detect the endpoints of isolated speech accurately. The algorithm offers better performance than traditional energy-based algorithms. The vocabulary for the experiment consists of the English digits one to nine, and the experiments were conducted on 360 utterances from a male speaker. Experimental results show that the accuracy of the algorithm is quite acceptable. Moreover, the computational overhead of the algorithm is low, since the cepstral coefficient parameters are reused later in the feature extraction stage of the speech recognition procedure.
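
The three cues named in this abstract (RMS energy, zero-crossing rate, and a Euclidean distance over cepstral coefficients relative to background frames) can be computed per frame as sketched below. The frame sizes, the use of the first frames as the noise reference, and the threshold multipliers are illustrative assumptions rather than the paper's exact algorithm:

```python
import numpy as np

def frame_features(x, sr, frame_len=400, hop=160, n_ceps=12, n_noise=10):
    """Per-frame RMS energy, zero-crossing rate, and Euclidean cepstral distance
    from an average background cepstrum estimated on the first n_noise frames."""
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len, hop)]
    rms, zcr, ceps = [], [], []
    for f in frames:
        rms.append(np.sqrt(np.mean(f ** 2)))
        zcr.append(np.mean(np.abs(np.diff(np.sign(f)))) / 2)
        spec = np.abs(np.fft.rfft(f * np.hamming(frame_len))) + 1e-10
        ceps.append(np.fft.irfft(np.log(spec))[:n_ceps])   # real cepstrum
    ceps = np.array(ceps)
    noise_ceps = ceps[:n_noise].mean(axis=0)
    cep_dist = np.linalg.norm(ceps - noise_ceps, axis=1)
    return np.array(rms), np.array(zcr), cep_dist

def detect_endpoints(rms, zcr, cep_dist, n_noise=10):
    """3-level decision: a frame counts as speech if it passes the energy test,
    the ZCR test (weak fricatives), or the cepstral-distance test."""
    speech = (rms > 3.0 * rms[:n_noise].mean()) | \
             (zcr > 2.0 * zcr[:n_noise].mean()) | \
             (cep_dist > 3.0 * cep_dist[:n_noise].mean())
    idx = np.where(speech)[0]
    return (idx[0], idx[-1]) if len(idx) else None
```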

Automatic Synthesis Method Using Prosody-Rich Database (대용량 운율 음성데이타를 이용한 자동합성방식)

  • Kim, Sang-Hun
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.87-92
    • /
    • 1998
  • In general, synthesis unit databases have been constructed by recording isolated words. In that case, each word boundary carries a typical prosodic pattern such as a falling intonation or pre-boundary lengthening. To obtain natural synthetic speech from this kind of database, the original speech must be artificially distorted; however, this artificial process tends to produce unnatural, unintelligible synthetic speech because of the excessive prosodic modification applied to the speech signal. To overcome these problems, we gathered thousands of sentences for the synthesis database. To build phone-level synthesis units, we trained a speech recognizer with the recorded speech and then segmented phone boundaries automatically. In addition, we used a laryngograph for epoch detection. From the automatically generated synthesis database, we chose the best phone and concatenated it directly, without any prosody processing. To select the best phone among multiple candidates, we used prosodic information such as the break strength of word boundaries, phonetic contexts, cepstrum, pitch, energy, and phone duration. A pilot test yielded some positive results.
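
The selection step described here (choosing the best phone among multiple candidates using break strength, phonetic context, cepstrum, pitch, energy, and duration) amounts to a weighted target cost. A minimal sketch with hypothetical feature names, normalisation constants, and weights:

```python
import numpy as np

# Hypothetical weights; in practice they would be tuned on held-out data.
WEIGHTS = {"break": 2.0, "context": 1.5, "pitch": 1.0,
           "energy": 0.5, "duration": 0.5, "cepstrum": 1.0}

def target_cost(cand, target):
    """Weighted mismatch between a candidate phone unit and the target spec.
    Both are dicts of prosodic and contextual features."""
    cost = WEIGHTS["break"] * abs(cand["break"] - target["break"])
    cost += WEIGHTS["context"] * (cand["left_phone"] != target["left_phone"])
    cost += WEIGHTS["context"] * (cand["right_phone"] != target["right_phone"])
    cost += WEIGHTS["pitch"] * abs(cand["f0"] - target["f0"]) / 50.0
    cost += WEIGHTS["energy"] * abs(cand["energy_db"] - target["energy_db"]) / 10.0
    cost += WEIGHTS["duration"] * abs(cand["dur_ms"] - target["dur_ms"]) / 50.0
    cost += WEIGHTS["cepstrum"] * float(np.linalg.norm(
        np.asarray(cand["mfcc"]) - np.asarray(target["mfcc"])))
    return cost

def select_best(candidates, target):
    """Pick the lowest-cost candidate; it is then concatenated directly,
    without any prosody modification."""
    return min(candidates, key=lambda c: target_cost(c, target))
```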

SPATIAL EXPLANATIONS OF SPEECH PERCEPTION: A STUDY OF FRICATIVES

  • Choo, Won;Huckvale, Mark
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.399-403
    • /
    • 1996
  • This paper addresses issues of perceptual constancy in speech perception through the use of a spatial metaphor for speech sound identity, as opposed to the more conventional characterisation in terms of multiple interacting acoustic cues. This spatial representation leads to a correlation between phonetic, acoustic, and auditory analyses of speech sounds, which can serve as the basis for a model of speech perception grounded in the general auditory characteristics of sounds. The correlations between the phonetic, perceptual, and auditory spaces of the set of English voiceless fricatives /f, θ, s, ʃ, h/ are investigated. The results show that the perception of fricative segments may be explained in terms of a 2-dimensional auditory space in which each segment occupies a region. The dimensions of the space were found to be the frequency of the main spectral peak and the 'peakiness' of the spectrum. These results support the view that perception of a segment is based on its occupancy of a multi-dimensional parameter space. In this way, final perceptual decisions on segments can be postponed until higher-level constraints can also be met.
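
The two reported dimensions (frequency of the main spectral peak and spectral 'peakiness') can be estimated from a fricative token as sketched below; the peakiness measure used here (peak-to-mean spectral ratio in dB) is one plausible operationalisation, not necessarily the paper's:

```python
import numpy as np

def fricative_dimensions(x, sr):
    """Return (main spectral peak frequency in Hz, peakiness in dB)
    for a fricative segment x sampled at rate sr."""
    spec = np.abs(np.fft.rfft(x * np.hamming(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    peak_idx = int(np.argmax(spec))
    peak_freq = freqs[peak_idx]
    # "Peakiness": how far the main peak stands above the mean spectrum level.
    peakiness_db = 20 * np.log10((spec[peak_idx] + 1e-12) / (spec.mean() + 1e-12))
    return peak_freq, peakiness_db
```

Each fricative token then occupies a region in this 2-D space; sibilants such as /s/ and /ʃ/ tend to show higher, more prominent spectral peaks than /f/ or /h/.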

Syllable-Type-Based Phoneme Weighting Techniques for Listening Intelligibility in Noisy Environments (소음 환경에서의 명료한 청취를 위한 음절형태 기반 음소 가중 기술)

  • Lee, Young Ho;Joo, Jong Han;Choi, Seung Ho
    • Phonetics and Speech Sciences
    • /
    • v.6 no.3
    • /
    • pp.165-169
    • /
    • 2014
  • The intelligibility of speech transmitted to listeners can be significantly degraded in noisy environments, such as auditoriums and train stations, due to ambient noise; noise-masked speech is hard for listeners to recognize. Among the conventional methods for improving speech intelligibility, the consonant-vowel intensity ratio (CVR) approach reinforces the power of consonants overall. However, excessively reinforced consonants do not help recognition, and only some consonants are improved by the CVR approach. In this paper, we propose a corrective weighting (CW) approach that reinforces consonant power differently according to the Korean syllable type, such as consonant-vowel-consonant (CVC), consonant-vowel (CV), and vowel-consonant (VC), considering the level of listeners' recognition. The proposed CW approach was evaluated with the subjective Comparison Category Rating (CCR) test of ITU-T P.800 and showed better performance, scoring 0.18 and 0.24 higher than the unprocessed speech and the CVR approach, respectively.
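
The corrective weighting idea (boosting consonant segments by a gain that depends on the syllable type they occur in) can be sketched as follows. The gain values and the segment annotation format are illustrative assumptions, not the weights proposed in the paper:

```python
import numpy as np

# Hypothetical per-syllable-type consonant gains in dB.
GAIN_DB = {"CVC": 6.0, "CV": 4.0, "VC": 8.0}

def apply_corrective_weighting(x, sr, segments):
    """segments: list of (start_s, end_s, is_consonant, syllable_type).
    Consonant spans are amplified by a syllable-type-dependent gain."""
    y = x.astype(np.float64).copy()
    for start_s, end_s, is_consonant, syl_type in segments:
        if not is_consonant:
            continue
        gain = 10 ** (GAIN_DB.get(syl_type, 0.0) / 20.0)
        i, j = int(start_s * sr), int(end_s * sr)
        y[i:j] *= gain
    # Renormalise so the overall level (and clipping risk) stays comparable.
    y *= np.max(np.abs(x)) / (np.max(np.abs(y)) + 1e-12)
    return y
```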

Secret Data Communication Method using Quantization of Wavelet Coefficients during Speech Communication (음성통신 중 웨이브렛 계수 양자화를 이용한 비밀정보 통신 방법)

  • Lee, Jong-Kwan
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10d
    • /
    • pp.302-305
    • /
    • 2006
  • In this paper, we propose a novel method for secret data communication that uses quantization of wavelet coefficients. First, the speech signal is partitioned into short time frames, and each frame is transformed with a wavelet transform (WT). We quantize the wavelet coefficients and embed the secret data into the quantized coefficients; the receiver interprets the quantization errors of the received speech as the secret data. Like most speech watermarking techniques, our method involves a trade-off between noise robustness and speech quality. We address this problem with partial quantization and a noise-level-dependent threshold. In addition, we improve speech quality with a de-noising method based on the wavelet transform; since the signal is already processed in the wavelet domain, this de-noising method is easy to apply. Simulation results in various noisy environments show that the proposed method is reliable for secret communication.
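
The embedding mechanism described here (quantizing wavelet coefficients so that the receiver can read the hidden bits back from the quantization pattern) is essentially quantization-index modulation. A minimal per-frame sketch using PyWavelets, with the step size and parity rule as assumed details:

```python
import numpy as np
import pywt

STEP = 0.02  # quantization step; trades robustness against audible distortion

def embed_bits(frame, bits, wavelet="db4"):
    """Hide one bit per detail coefficient by snapping the coefficient to an
    even (bit 0) or odd (bit 1) multiple of STEP."""
    cA, cD = pywt.dwt(frame, wavelet)
    cD = cD.copy()
    for k, bit in enumerate(bits[:len(cD)]):
        q = np.round(cD[k] / STEP)
        if int(q) % 2 != bit:  # move to the nearest multiple with the right parity
            q += 1.0 if cD[k] >= q * STEP else -1.0
        cD[k] = q * STEP
    return pywt.idwt(cA, cD, wavelet)[:len(frame)]

def extract_bits(frame, n_bits, wavelet="db4"):
    """Receiver side: the parity of each quantized detail coefficient is the bit."""
    _, cD = pywt.dwt(frame, wavelet)
    return [int(np.round(c / STEP)) % 2 for c in cD[:n_bits]]
```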

Evaluation of English speaking proficiency under fixed speech rate: Focusing on utterances produced by Korean child learners of English

  • Choi, Narah;Jang, Tae-Yeoub
    • Phonetics and Speech Sciences
    • /
    • v.15 no.1
    • /
    • pp.47-54
    • /
    • 2023
  • This study tested the hypothesis that Korean evaluators can score L2 speech appropriately even when speech rate features are unavailable. Two perception experiments, a preliminary and a main experiment, were conducted sequentially. The purpose of the preliminary experiment was to categorize English-as-a-foreign-language (EFL) speakers into two groups, advanced learners and lower-level learners, based on proficiency scores given by five human raters. In the main experiment, a set of stimuli was prepared in which all data tokens were modified to have a uniform speech rate. Ten human evaluators were asked to score the stimulus tokens on a 5-point scale, and these scores were statistically analyzed to determine whether there was a significant difference in utterance production between the two groups. The results of the preliminary experiment confirm that higher-proficiency learners speak faster than lower-proficiency learners. The results of the main experiment indicate that, under controlled speech-rate conditions, human raters can still assess learner proficiency appropriately, probably thanks to the linguistic features the raters attend to during evaluation.
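
The stimulus preparation step (modifying every token to a uniform speech rate) can be approximated with standard time-scale modification. A minimal sketch, assuming librosa's phase-vocoder time stretch, a syllable count supplied per utterance, and an arbitrary target rate:

```python
import librosa
import soundfile as sf

TARGET_RATE = 4.0  # target speech rate in syllables per second (assumed value)

def normalize_speech_rate(in_path, out_path, n_syllables):
    """Time-stretch an utterance so its syllable rate matches TARGET_RATE,
    leaving pitch largely unchanged."""
    y, sr = librosa.load(in_path, sr=None)
    current_rate = n_syllables / (len(y) / sr)     # syllables per second
    stretch = TARGET_RATE / current_rate           # >1 speeds up, <1 slows down
    y_norm = librosa.effects.time_stretch(y, rate=stretch)
    sf.write(out_path, y_norm, sr)
```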