• Title/Summary/Keyword: Speech sound


Improving LD-CELP using frame classification and modified synthesis filter (프레임 분류와 합성필터의 변형을 이용한 적은 지연을 갖는 음성 부호화기의 성능)

  • 임은희;이주호;김형명
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.6
    • /
    • pp.1430-1437
    • /
    • 1996
  • A low delay code excited linear predictive speech coder (LD-CELP) at bit rates under 8 kbps is considered. We try to improve the performance of the speech coder with frame-type-dependent modification of the synthesis filter. We first classify frames into three groups: voiced, unvoiced, and onset. For voiced and unvoiced frames, the spectral envelope of the synthesis filter is adapted to the phonetic characteristics. For transition (onset) frames from unvoiced to voiced, a synthesis filter interpolated with the bias filter is used. The proposed vocoder produces clearer sound at a similar delay level compared with existing LD-CELP vocoders.
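
The abstract does not spell out how frames are labeled voiced, unvoiced, or onset. A minimal illustrative sketch of one common approach, using short-time energy and zero-crossing rate with hypothetical thresholds (not the paper's actual classifier), might look like this:

```python
import numpy as np

def classify_frame(frame, prev_label, energy_thresh=1e-3, zcr_thresh=0.25):
    """Toy voiced/unvoiced/onset decision; both thresholds are illustrative only."""
    energy = np.mean(frame ** 2)                         # short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2   # zero-crossing rate per sample
    if energy < energy_thresh:
        label = "unvoiced"
    else:
        label = "voiced" if zcr < zcr_thresh else "unvoiced"
    # Flag the transition from an unvoiced frame into a voiced one as an onset frame.
    if prev_label == "unvoiced" and label == "voiced":
        return "onset"
    return label
```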


Nasal Place Detection with Acoustic Phonetic Parameters (음향음성학 파라미터를 사용한 비음 위치 검출)

  • Lee, Suk-Myung;Choi, Jeung-Yoon;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.31 no.6
    • /
    • pp.353-358
    • /
    • 2012
  • This paper describes acoustic phonetic parameters for detecting nasal place in a knowledge-based speech recognition system. Initial acoustic phonetic parameters are selected by studying nasal production mechanisms, in which sound is radiated through the nasal cavity. Nasals are produced with differing articulatory configurations and can be classified by measuring acoustic phonetic parameters such as band energy ratios, band energy differences, formants, and formant differences. These acoustic phonetic parameters were tested in a classification experiment among labial, alveolar, and velar nasals. An overall classification rate of 57.5% is obtained using the proposed acoustic phonetic parameters on the TIMIT database.
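
For illustration, a band energy ratio of the kind mentioned above could be computed roughly as follows; the band edges and window are assumptions, not the parameters used in the paper:

```python
import numpy as np

def band_energy_ratio(frame, fs, low_band=(0, 500), high_band=(500, 4000)):
    """Ratio of spectral energy in two frequency bands (band edges are illustrative)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)

    def band_energy(lo, hi):
        return spectrum[(freqs >= lo) & (freqs < hi)].sum()

    return band_energy(*low_band) / (band_energy(*high_band) + 1e-12)
```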

Acoustical Analysis of Phonological Reduction in Conversational Japanese (일본어 회화문에 나타난 축약형의 음운론적 해석과 음향음성학적 분석)

  • Choi, Young-Sook
    • Speech Sciences
    • /
    • v.8 no.4
    • /
    • pp.229-241
    • /
    • 2001
  • Using eighteen texts from various genres of present-day Japanese, I collected phonologically reduced forms frequently observed in conversational Japanese and classified them in search of a unified explanation of the phonological phenomena. I found 7,516 cases of reduced forms, which I divided into 43 categories according to the types of phonological change they have undergone. The general tendency is that deletion and fusion of a phoneme or an entire syllable take place frequently, resulting in a decrease in the number of syllables. From a morphosyntactic point of view, phonological reduction often occurs at NP and VP morpheme boundaries. The following findings are drawn from phonetic observations of the reduction. (1) Vowels are more easily deleted than consonants. (2) Bilabials ([m], [b], and [w]) are the most likely candidates for deletion. (3) In a concatenation of vowels, close vowels are absorbed into open vowels, or two adjacent vowels come to create another vowel, in which case reconstruction of the original sequence is not always predictable. (4) Alveolars are palatalized under the influence of front vowels. (5) Regressive assimilation takes place in a syllable starting with [r], changing the entire syllable into a choked sound (geminate) or a syllabic nasal, depending on the voicing of the following phoneme.


Direction-of-Arrival Estimation of Speech Signals Based on MUSIC and Reverberation Component Reduction (MUSIC 및 반향 성분 제거 기법을 이용한 음성신호의 입사각 추정)

  • Chang, Hyungwook;Jeong, Sangbae;Kim, Youngil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.6
    • /
    • pp.1302-1309
    • /
    • 2014
  • In this paper, we propose a method to improve the performance of direction-of-arrival (DOA) estimation of a speech source using a multiple signal classification (MUSIC)-based algorithm. The proposed algorithm utilizes a complex coefficient band pass filter to generate the narrowband signals for analysis. In addition, reverberation component reduction and quadratic function-based response approximation of the MUSIC spatial spectrum are used to improve the accuracy of DOA estimation. Experimental results show that the proposed method outperforms the well-known generalized cross-correlation (GCC)-based DOA estimation algorithm in terms of both estimation error and success rate.
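
As a point of reference, the core MUSIC pseudo-spectrum for a uniform linear microphone array can be sketched as below. This is only the textbook narrowband step, not the paper's full method (which adds the complex coefficient band pass filterbank, reverberation component reduction, and the quadratic response approximation), and the array geometry is assumed:

```python
import numpy as np

def music_spectrum(X, n_sources, mic_spacing, freq, angles_deg, c=343.0):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.

    X: (n_mics, n_snapshots) complex narrowband snapshots, e.g. one STFT bin.
    """
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # spatial covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = eigvecs[:, : n_mics - n_sources]         # noise-subspace eigenvectors
    wavelength = c / freq
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * mic_spacing * np.arange(n_mics)
                   * np.sin(theta) / wavelength)  # steering vector
        denom = np.abs(a.conj() @ En @ En.conj().T @ a)
        spectrum.append(1.0 / (denom + 1e-12))
    return np.array(spectrum)                     # peaks indicate candidate DOAs
```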

An Acoustic Study of English Non-Phoneme Schwa and the Korean Full Vowel /e/

  • Ahn, Soo-Woong
    • Speech Sciences
    • /
    • v.7 no.4
    • /
    • pp.93-105
    • /
    • 2000
  • The English schwa sound has special characteristics which are distinct from other vowels: it is non-phonemic and occurs only in unstressed syllables. Compared with the English schwa, the Korean /e/ is a full vowel with phonemic contrast. This paper had three aims. The first was to see whether there is any relationship between English full vowels and their reduced schwa forms. The second was to see whether there is any possible target in the English schwa sounds derived from different full vowels. The third was to compare the English non-phonemic schwa and the Korean full vowel /e/ in terms of articulatory position and duration. The results showed that there is no relationship between each of the full vowels and its schwa. The schwa tended to converge toward a possible target at F1 = 456 Hz and F2 = 1560 Hz. The Korean vowel /e/ appeared to have its own distinct position for each speaker, different from the neutral tongue position. The evidence that the Korean /e/ is a back vowel was supported by the Seoul dialect speaker. In duration, the English schwa was much shorter than the full vowels, but there was no significant difference in length between the Korean /e/ and other Korean vowels.
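
The paper does not describe its measurement tooling in the abstract, but a comparable F1/F2 and duration measurement can be sketched with the praat-parselmouth Python package; the file name, token, and midpoint choice here are placeholders, not the study's procedure:

```python
import parselmouth  # Python bindings to Praat (praat-parselmouth package)

snd = parselmouth.Sound("schwa_token.wav")       # hypothetical vowel token
formants = snd.to_formant_burg()                 # Burg formant analysis
midpoint = snd.get_total_duration() / 2
f1 = formants.get_value_at_time(1, midpoint)     # F1 in Hz at the midpoint
f2 = formants.get_value_at_time(2, midpoint)     # F2 in Hz at the midpoint
duration_ms = snd.get_total_duration() * 1000
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz, duration = {duration_ms:.0f} ms")
```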


A Study of Correlation Between Phonological Awareness and Word Identification Ability of Hearing Impaired Children (청각장애 아동의 음운인식 능력과 단어확인 능력의 상관연구)

  • Kim, Yu-Kyung;Kim, Mun-Jung;Ahn, Jong-Bok;Seok, Dong-Il
    • Speech Sciences
    • /
    • v.13 no.3
    • /
    • pp.155-167
    • /
    • 2006
  • Hearing-impaired children possess poor underlying perceptual knowledge of the sound system and show delayed development of its segmental organization. The purpose of this study was to investigate the relationship between phonological awareness ability and word identification ability in hearing-impaired children. Fourteen children with moderately severe hearing loss participated in this study, and all tasks were individually administered. The phonological awareness tests consisted of syllable blending, syllable segmentation, syllable deletion, body-coda discrimination, phoneme blending, phoneme segmentation, and phoneme deletion. Closed-set monosyllabic words (12 items) and lists 1 and 2 of the open-set monosyllabic words in EARS-K were examined for word identification. The results were as follows. First, among the phonological awareness tasks, closed-set word identification showed a high positive correlation with body-coda discrimination, phoneme blending, and phoneme deletion, while open-set word identification showed a high positive correlation with phoneme blending, phoneme deletion, and phoneme segmentation. Second, at the level of phonological awareness, closed-set word identification showed a high positive correlation with the levels of body-coda awareness and phoneme awareness, while open-set word identification showed a high positive correlation only with the level of phoneme awareness.
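
The abstract reports correlations between task scores without naming the coefficient; a minimal sketch of computing a rank correlation over hypothetical per-child scores (not the study's data) could look like this:

```python
from scipy.stats import spearmanr

# Hypothetical scores for 14 children; the study's actual data are not reproduced here.
phoneme_blending    = [3, 5, 2, 7, 6, 4, 8, 1, 5, 6, 7, 2, 3, 4]
word_identification = [4, 6, 3, 8, 7, 5, 9, 2, 5, 7, 8, 3, 4, 5]

rho, p_value = spearmanr(phoneme_blending, word_identification)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```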


Development of Digital Endoscopic Data Management System (디지탈 내시경 데이터 management system의 개발)

  • Song, C.G.;Lee, S.M.;Lee, Y.M.;Kim, W.K.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1996 no.11
    • /
    • pp.304-306
    • /
    • 1996
  • Endoscopy has become a crucial diagnostic and therapeutic procedure in clinical areas. Over the past three years, we have developed a computerized system to record and store clinical data pertaining to endoscopic surgery for laparoscopic cholecystectomy, pelviscopic endometriosis, and surgical arthroscopy. In this study, we developed a computer system composed of a frame grabber, sound board, VCR control board, LAN card, and EDMS (endoscopic data management software). The computer system also controls peripheral instruments such as a color video printer and video cassette recorder, along with the endoscopic input/output signals (images and the doctor's speech). We also developed a one-body camera control unit system that includes an endoscopic miniature camera and light source. Our system offers unsurpassed image quality in terms of resolution and color fidelity. The digital endoscopic data management system is based on an open architecture and a set of widely available industry standards, namely Windows 3.1 as the operating system, TCP/IP as the network protocol, and a time-sequence-based database that handles both images and the doctor's speech synchronized with the endoscopic images.


Native language Interference in producing the Korean rhythmic structure: Focusing on Japanese (한국어 리듬구조에 미치는 L1의 영향: 일본인 학습자를 중심으로)

  • Yune, Youngsook
    • Phonetics and Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.45-52
    • /
    • 2018
  • This study investigates the effect of Japanese (L1) on the production of Korean rhythmic structure. Korean and Japanese have typologically different rhythmic structures, being a syllable-timed language and a mora-timed language, respectively. This rhythmic difference stems from the different phonological properties of the two languages. Because of this difference, Japanese learners of Korean may produce a rhythm different from that of native Korean speakers. To investigate the influence of the native language's rhythm on the target language, we conducted an acoustic analysis using rhythm metrics such as %V, VarcoV, and VarcoS. Four native Korean speakers (KS) and ten advanced Japanese learners of Korean (JS) participated in a production test. The analyzed material consisted of six Korean sentences containing various syllable structures. The results showed that the rhythms of KS and JS differ in %V as well as in VarcoV. For VarcoS, a significant rhythmic difference was observed in VC and CVC syllables in which the coda segment is a nasal. This study allowed us to observe the influence of L1 on the production of L2 rhythm.
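
The rhythm metrics named above have standard definitions: %V is the proportion of utterance duration that is vocalic, and Varco measures are coefficients of variation of interval durations. A small sketch with hypothetical interval durations:

```python
import numpy as np

def percent_v(vocalic_ms, consonantal_ms):
    """%V: share of total duration taken up by vocalic intervals."""
    return 100.0 * sum(vocalic_ms) / (sum(vocalic_ms) + sum(consonantal_ms))

def varco(intervals_ms):
    """Varco (e.g., VarcoV): coefficient of variation of interval durations, x100."""
    x = np.asarray(intervals_ms, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical vocalic and consonantal interval durations (ms) for one sentence.
vocalic = [85, 120, 60, 140, 95]
consonantal = [70, 55, 90, 65, 80]
print(f"%V = {percent_v(vocalic, consonantal):.1f}, VarcoV = {varco(vocalic):.1f}")
```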

Pitch trajectories of English vowels produced by American men, women, and children

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.31-37
    • /
    • 2018
  • Pitch trajectories reflect the continuous variation of vocal fold movements over time. This study examined the pitch trajectories of English vowels produced by 139 American English speakers, statistically analyzing the trajectories using generalized additive mixed models (GAMMs). First, Praat was used to read the sound data of Hillenbrand et al. (1995). A pitch analysis script was then prepared, and six pitch values at corresponding time points within each vowel segment were collected and checked. The results showed that the group of men produced the lowest pitch trajectories, followed by the groups of women, boys, and girls. The density line showed a bimodal distribution. The pitch values at the six corresponding time points formed a single dip, changing gradually across the vowel segment from 204 to 193 to 196 Hz. Normality tests performed on the pitch data rejected the null hypothesis, so nonparametric tests were conducted to identify significant differences in the values among the four groups. The GAMMs fitted to all the pitch data produced significant results among the pitch values at the six corresponding time points but not between the groups of boys and girls; they also revealed that these two groups were significantly different only at the first and second time points. Accordingly, the methodology of this study and its findings may be applicable to future studies comparing curvilinear data sets elicited under experimental conditions.
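
The study used a Praat script; an equivalent sampling of pitch at six equally spaced points within a vowel can be sketched with praat-parselmouth, where the file name and segment boundaries are placeholders:

```python
import numpy as np
import parselmouth  # Python bindings to Praat

snd = parselmouth.Sound("vowel_token.wav")   # hypothetical token from the corpus
pitch = snd.to_pitch()                       # default autocorrelation pitch analysis
start, end = 0.10, 0.35                      # hypothetical vowel boundaries (seconds)
times = np.linspace(start, end, 6)
values = [pitch.get_value_at_time(t) for t in times]  # Hz; NaN where unvoiced
print([round(v, 1) if not np.isnan(v) else None for v in values])
```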

A comparison of normalized formant trajectories of English vowels produced by American men and women

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.1-8
    • /
    • 2019
  • Formant trajectories reflect the continuous variation of speakers' articulatory movements over time. This study examined the formant trajectories of English vowels produced by ninety-three American men and women; the values were normalized using the scale function in R and compared using generalized additive mixed models (GAMMs). Praat was used to read the sound data of Hillenbrand et al. (1995). A formant analysis script was prepared, and six formant values at corresponding time points within each vowel segment were collected. The results indicate that women yielded proportionately higher formant values than men. The standard deviations of each group showed similar patterns along the first formant (F1) and second formant (F2) axes and across the measurement points. R was used to scale the first two formant data sets of men and women separately. GAMMs fitted to all the scaled formant data produced various patterns of deviation along the measurement points. Generally, a greater group difference exists in F1 than in F2. Also, women's trajectories appear more dynamic along the vertical and horizontal axes than those of men. The trajectories are related acoustically to F1 and F2 and anatomically to jaw opening and tongue position. We conclude that scaling and nonlinear testing are useful tools for pinpointing differences between speaker groups' formant trajectories. This research could serve as a foundation for future studies comparing curvilinear data sets.
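
R's scale() centers each variable and divides by its sample standard deviation; the same normalization can be sketched in Python for comparison, with hypothetical F1 values:

```python
import numpy as np

def scale(values):
    """Z-score normalization analogous to R's scale(): zero mean, unit variance (n-1)."""
    x = np.asarray(values, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# Hypothetical F1 measurements (Hz) for one speaker group, scaled before model fitting.
f1_hz = [342, 360, 355, 410, 398, 372]
print(scale(f1_hz).round(2))
```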