• Title/Summary/Keyword: Non-speech Sounds


Non-Dialog Section Detection for the Descriptive Video Service Contents Authoring (화면해설방송 저작을 위한 비 대사 구간 검출)

  • Jang, Inseon;Ahn, ChungHyun;Jang, Younseon
    • Journal of Broadcast Engineering
    • /
    • v.19 no.3
    • /
    • pp.296-306
    • /
    • 2014
  • This paper addresses the problem of non-dialog section detection for DVS authoring, the goal of which is to find meaningful sections in the broadcast audio where audio description can be inserted. Since broadcast audio contains a variety of sounds, the method first discriminates between speech and non-speech for each audio frame. The proposed method jointly exploits the inter-channel structure and the speech-source characteristics of the stereo broadcast audio. Finally, rule-based post-processing is applied to detect non-dialog sections whose length is appropriate for audio description. The proposed method provides more accurate detection than the conventional method, and experimental results on real broadcast contents show its qualitative superiority.
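The rule-based post-processing step can be sketched as follows. This is a minimal illustration assuming a per-frame speech/non-speech decision is already available; the frame-hop and minimum-length parameters are hypothetical placeholders, not values from the paper.

```python
def non_dialog_sections(speech_flags, frame_hop_s=0.02, min_len_s=3.0):
    """Merge consecutive non-speech frames into sections long enough
    to hold an audio description (rule-based post-processing sketch)."""
    sections, start = [], None
    for i, is_speech in enumerate(speech_flags):
        if not is_speech and start is None:
            start = i                     # a non-speech run begins
        elif is_speech and start is not None:
            if (i - start) * frame_hop_s >= min_len_s:
                sections.append((start * frame_hop_s, i * frame_hop_s))
            start = None                  # run ended (kept only if long enough)
    # handle a run that extends to the end of the recording
    if start is not None and (len(speech_flags) - start) * frame_hop_s >= min_len_s:
        sections.append((start * frame_hop_s, len(speech_flags) * frame_hop_s))
    return sections
```

Each returned pair is a candidate insertion window in seconds; shorter non-speech gaps are discarded as unusable for narration.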

An ERP Study of the Perception of English High Front Vowels by Native Speakers of Korean and English (영어전설고모음 인식에 대한 ERP 실험연구: 한국인과 영어원어민을 대상으로)

  • Yun, Yungdo
    • Phonetics and Speech Sciences
    • /
    • v.5 no.3
    • /
    • pp.21-29
    • /
    • 2013
  • The mismatch negativity (MMN) is known to be a fronto-central negative component of the auditory event-related potential (ERP). Näätänen et al. (1997) and Winkler et al. (1999) argue that the MMN acts as a cue to phoneme perception in the ERP paradigm. In this study, a perception experiment based on an ERP paradigm was conducted to examine how Korean and American English speakers perceive the American English high front vowels. The study found that the MMN obtained from both Korean and American English speakers appeared at around the same latency after they heard the F1s of English high front vowels. However, when the same groups heard English words containing these vowels, the American English listeners' MMN appeared slightly earlier than the Korean listeners' MMN. These findings suggest that non-speech sounds, such as the F1s of vowels, may be processed similarly across speakers of different languages, whereas phonemes are processed differently: a native-language phoneme is processed faster than a non-native one.

Speech extraction based on AuxIVA with weighted source variance and noise dependence for robust speech recognition (강인 음성 인식을 위한 가중화된 음원 분산 및 잡음 의존성을 활용한 보조함수 독립 벡터 분석 기반 음성 추출)

  • Shin, Ui-Hyeop;Park, Hyung-Min
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.3
    • /
    • pp.326-334
    • /
    • 2022
  • In this paper, we propose a speech enhancement algorithm as pre-processing for robust speech recognition in noisy environments. Auxiliary-function-based Independent Vector Analysis (AuxIVA) is performed with a weighted covariance matrix using time-varying variances scaled by target masks, which represent the time-frequency contributions of the target speech. The mask estimates can be obtained from a Neural Network (NN) pre-trained for speech extraction, or from diffuseness based on the Coherence-to-Diffuse power Ratio (CDR), to find the direct-sound component of the target speech. In addition, the outputs for omni-directional noise are closely chained by sharing the time-varying variances, similarly to independent subspace analysis or IVA. The AuxIVA-based speech extraction method is also formulated in the Independent Low-Rank Matrix Analysis (ILRMA) framework by extending the Non-negative Matrix Factorization (NMF) of the noise outputs to Non-negative Tensor Factorization (NTF), so as to maintain the inter-channel dependency among the noise output channels. Experimental results on the CHiME-4 datasets demonstrate the effectiveness of the presented algorithms.
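The mask-weighted covariance computation at the core of such a formulation can be sketched roughly as below. The mask-to-variance mapping and the weighting are simplified assumptions for illustration only, not the paper's exact update rules.

```python
import numpy as np

def weighted_covariance(X, mask, eps=1e-8):
    """Weighted spatial covariance per frequency bin.

    X: (F, T, M) STFT of an M-channel mixture; mask: (F, T) target mask in [0, 1].
    The mask is used directly as a time-varying variance proxy, so frames the
    mask attributes to the target receive small auxiliary weights 1/r(f, t)."""
    var = np.maximum(mask, eps)           # time-varying source variance proxy
    w = 1.0 / var                         # auxiliary-function weight 1/r(f, t)
    # R[f] = sum_t w[f,t] * x[f,t] x[f,t]^H / sum_t w[f,t]
    R = np.einsum('ft,ftm,ftn->fmn', w, X, X.conj()) / w.sum(axis=1)[:, None, None]
    return R
```

With a uniform mask this reduces to the ordinary sample covariance, which is a quick sanity check on the weighting.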

Non-word repetition may reveal different errors in naive listeners and second language learners

  • Holliday, Jeffrey J.;Hong, Minkyoung
    • Phonetics and Speech Sciences
    • /
    • v.12 no.1
    • /
    • pp.1-9
    • /
    • 2020
  • The perceptual assimilation of a nonnative phonological contrast can change with linguistic experience, resulting in naïve listeners and novice second language (L2) learners potentially assimilating the members of a nonnative contrast to different native (L1) categories. While it has been shown that this sort of change can affect the discrimination of the nonnative contrast, it has not been tested whether such a change could have consequences for the production of the contrast. In this study, L1 speakers of Mandarin Chinese who were (1) naïve to Korean, (2) novice L2 learners, or (3) advanced L2 learners participated in a Korean non-word repetition task using word-initial sibilants. The initial CVs of their repetitions were then played to L1 Korean listeners who categorized the initial consonant. The naïve talkers were more likely to repeat an initial /sha/ as an affricate, whereas the L2 learners repeated it as a fricative, in line with how these listeners have been shown to assimilate Korean sibilants to Mandarin categories. This result suggests that errors in the production of new words presented auditorily to nonnative listeners may be driven by how they perceptually assimilate the nonnative sounds, emphasizing the need to better understand what drives changes in perceptual assimilation that accompany increased linguistic experience.

An Acoustic Study of English Non-Phoneme Schwa and the Korean Full Vowel /e/

  • Ahn, Soo-Woong
    • Speech Sciences
    • /
    • v.7 no.4
    • /
    • pp.93-105
    • /
    • 2000
  • The English schwa has special characteristics distinct from other vowels: it is non-phonemic and occurs only in unstressed syllables. Compared with the English schwa, the Korean /e/ is a full vowel with phonemic contrast. This paper had three aims: first, to see whether there is any relationship between English full vowels and their reduced schwas; second, to see whether there is a possible target for English schwas derived from different full vowels; and third, to compare the English non-phonemic schwa and the Korean full vowel /e/ in terms of articulatory position and duration. The results showed no relationship between each full vowel and its schwa. The schwa tended to converge toward a possible target of F1 = 456 Hz and F2 = 1560 Hz. The Korean vowel /e/ appeared to have a distinct, speaker-specific position different from the neutral tongue position, and the evidence that the Korean /e/ is a back vowel was supported by the Seoul dialect speaker. In duration, the English schwa was much shorter than the full vowels, but there was no significant difference in length between the Korean /e/ and other Korean vowels.
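The reported convergence target (F1 = 456 Hz, F2 = 1560 Hz) suggests a simple way to quantify how close a measured token lies to the schwa target. The raw-Hz Euclidean distance below is one illustrative choice, not the paper's analysis; a perceptual scale such as Bark would be an alternative.

```python
import math

def distance_to_schwa(f1_hz, f2_hz, target=(456.0, 1560.0)):
    """Euclidean distance (Hz) of a measured vowel token from the schwa
    convergence target reported in the paper (F1 = 456 Hz, F2 = 1560 Hz)."""
    return math.hypot(f1_hz - target[0], f2_hz - target[1])
```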


Quantitative Evaluation of the Performance of Monaural FDSI Beamforming Algorithm using a KEMAR Mannequin (KEMAR 마네킹을 이용한 단이 보청기용 FDSI 빔포밍 알고리즘의 정량적 평가)

  • Cho, Kyeongwon;Nam, Kyoung Won;Han, Jonghee;Lee, Sangmin;Kim, Dongwook;Hong, Sung Hwa;Jang, Dong Pyo;Kim, In Young
    • Journal of Biomedical Engineering Research
    • /
    • v.34 no.1
    • /
    • pp.24-33
    • /
    • 2013
  • To enhance the speech perception of hearing aid users in noisy environments, most hearing aid devices adopt beamforming algorithms, such as the first-order differential microphone (DM1) and the two-stage directional microphone (DM2) algorithms, which preserve sounds from the direction of the interlocutor and attenuate ambient sounds from other directions. However, these conventional algorithms show poor directionality in the low-frequency range. To improve the speech perception of hearing aid users at low frequencies, our group previously proposed a fractional delay subtraction and integration (FDSI) algorithm and estimated its theoretical performance by computer simulation. In this study, we performed a KEMAR test in a non-reverberant room comparing the performance of the DM1, DM2, broadband beamforming (BBF), and proposed FDSI algorithms using several objective indices: signal-to-noise ratio (SNR) improvement, segmental SNR (seg-SNR) improvement, the perceptual evaluation of speech quality (PESQ), and the Itakura-Saito measure (IS). The FDSI algorithm achieved −3.26 to 7.16 dB in SNR improvement, −1.94 to 5.41 dB in seg-SNR improvement, 1.49 to 2.79 in PESQ, and 0.79 to 3.59 in IS, showing the highest SNR and seg-SNR improvements and the lowest IS. We believe the proposed FDSI algorithm has potential as a beamformer for digital hearing aid devices.
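A segmental SNR of the kind used here as an evaluation index can be sketched as follows. The frame length and the clamping range are common conventions in the speech-enhancement literature, not settings taken from the paper.

```python
import math

def seg_snr_db(clean, processed, frame_len=160, floor=-10.0, ceil=35.0):
    """Frame-averaged segmental SNR in dB, clamped to a typical [-10, 35] dB
    range so silent or degenerate frames do not dominate the average."""
    snrs = []
    for i in range(0, len(clean) - frame_len + 1, frame_len):
        sig = sum(s * s for s in clean[i:i + frame_len])
        err = sum((s - p) ** 2 for s, p in zip(clean[i:i + frame_len],
                                               processed[i:i + frame_len]))
        if err == 0 or sig == 0:          # skip degenerate frames
            continue
        snrs.append(min(max(10 * math.log10(sig / err), floor), ceil))
    return sum(snrs) / len(snrs) if snrs else 0.0
```

A seg-SNR *improvement* would then be `seg_snr_db(clean, enhanced) - seg_snr_db(clean, noisy)` for the same reference signal.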

Attentional modulation on multiple acoustic cues in phonological processing of L2 sounds

  • Hyunjung Lee;Eun Jong Kong
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.11-16
    • /
    • 2023
  • The present study examines how cognitive attention affects Korean learners of English (L2) in perceiving the English stop voicing distinction (/d/-/t/). It tested the effect of an attentional distractor on primary and non-primary acoustic cues, focusing on the roles of Voice Onset Time (VOT) and fundamental frequency (F0). Using the dual-task paradigm, 28 Korean adult learners of English participated in a stop identification task carried out with (distractor) and without (no-distractor) a concurrent arithmetic calculation. Results showed that when distracted, the Korean learners' sensitivity to VOT decreased, as previously reported for native English speakers. Furthermore, although F0 is a primary cue for the L1 Korean stop laryngeal contrast, its role in the L2 English voicing distinction was also affected by the distractor, without compensating for the reduced VOT sensitivity. These findings suggest that flexible use of multiple cues in the L1 is not necessarily beneficial for L2 phonological processing when coping with an adverse listening condition.
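A d'-style separation index is one simple way to make "sensitivity to a cue" concrete: how well a single cue's values separate the two stop categories. This sketch is purely illustrative and is not the analysis the authors used.

```python
from math import sqrt
from statistics import fmean, pvariance

def cue_weight(category_a, category_b):
    """d'-style separation of one acoustic cue (e.g. VOT or F0) between two
    stop categories: |mean difference| divided by the pooled population SD."""
    pooled_sd = sqrt((pvariance(category_a) + pvariance(category_b)) / 2)
    return abs(fmean(category_a) - fmean(category_b)) / pooled_sd
```

A cue whose distributions overlap heavily (as F0 might under distraction) yields a small index, while a well-separated cue like VOT in attentive listening yields a large one.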

Analysis of acoustical characteristic changes in voice after drinking and singing (음주 및 가창 후 음성의 음향학적 특성 변화 분석)

  • Hwang, Bo-Myung;Noh, Dong-Woo;Paik, Eun-A;Jeong, Ok-Ran
    • Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.39-48
    • /
    • 2001
  • The purpose of this study was to examine changes in acoustic characteristics after drinking alcoholic beverages and after singing, in order to establish vocal hygiene guidelines for both singers and non-singers. Twenty-one university students (10 males and 11 females) vocalized /a/ before drinking, after drinking, and after singing. Changes in vocal range and acoustic characteristics were analyzed with Dr. Speech 4.0 (Tiger Electronics). No significant difference in vocal range was observed after drinking; however, there were statistically significant changes in vocal range after singing. We may infer that an appropriate amount of singing, functioning as a vocal warm-up, rather than drinking alone, improved the subjects' ability to lengthen the vocal folds, which is directly related to the ability to produce high-pitched sounds. The change in jitter in female voices after singing was the only significant acoustic factor; changes in shimmer and NNE were not significant after either drinking or singing. Subjects judged to sing better showed minimal acoustic changes, which may be due to their well-trained vocal fold function. The results may also address the necessity of vocal function exercises for patients with neurogenic voice disorders, including dysarthria, and the need for more extensive research with a larger number of subjects, including professional voice users.
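Jitter and shimmer of the kind reported by analysis tools such as Dr. Speech can be approximated from cycle-level measurements. The local percent definitions below are standard textbook approximations, not necessarily the tool's exact formulas.

```python
def jitter_percent(periods_s):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, relative to the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods_s, periods_s[1:])]
    mean_period = sum(periods_s) / len(periods_s)
    return 100.0 * (sum(diffs) / len(diffs)) / mean_period

def shimmer_percent(amplitudes):
    """Local shimmer (%): the same measure applied to cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    mean_amp = sum(amplitudes) / len(amplitudes)
    return 100.0 * (sum(diffs) / len(diffs)) / mean_amp
```

A perfectly periodic, constant-amplitude phonation gives 0% for both; cycle-to-cycle irregularity raises the values.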


Heteronyms in modern Korean and their transcription in the IPA and the Roman alphabet (우리말 동철이음어(同綴異音語) IPA·로마자 표기 (사~섬))

  • Youe MahnGunn
    • MALSORI
    • /
    • no.37
    • /
    • pp.49-71
    • /
    • 1999
  • The purpose of this paper is to gather pairs of heteronyms in modern Korean and transcribe them in the IPA and the Roman alphabet, in order to propose that all of them should be differentiated in Hangul orthography. More than a quarter of the Korean vocabulary consists of words with a long vowel, and the number of minimal pairs distinguished only by the chroneme reaches nearly ten thousand (i.e. twenty thousand words). The letter h is used syllable-finally here to represent the long vowel in romanization, except for the vowel '으' [ɯː], which is transcribed by doubling the letter u (i.e. uu). Another factor producing many heteronyms in Korean is the lack of full indication of the non-automatic reinforcement of the initial consonant of a word (or a morpheme) when preceded by another within a phrase (or a word). These reinforced word-initial consonants are written here with the letter c and an apostrophe (as in c'g-, c'd-, c'b-, c's-, c'j-) in romanization. Reinforced morpheme-initial consonants within a word are written with the letters k, t, p, ss, and cz for the ㄲ, ㄸ, ㅃ, ㅆ, and ㅉ sounds respectively. The contrasted pronunciations of pairs of heteronyms beginning with the ㅅ /sʰ/ and ㅆ /s/ sounds are transcribed here for exemplification.
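The transcription conventions quoted in the abstract can be captured in a small lookup. The sketch below covers only the two rules stated there (syllable-final h for length, uu for long 으, and the letters for reinforced morpheme-initial consonants); everything else about the paper's scheme is left out.

```python
# Reinforced morpheme-initial consonants, per the abstract's convention
REINFORCED_INITIALS = {'ㄲ': 'k', 'ㄸ': 't', 'ㅃ': 'p', 'ㅆ': 'ss', 'ㅉ': 'cz'}

def long_vowel(roman_vowel):
    """Vowel-length marking per the paper's scheme: the long 으 (romanized
    here as 'u') is doubled to 'uu'; other long vowels take a final 'h'."""
    return 'uu' if roman_vowel == 'u' else roman_vowel + 'h'
```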


A Study on Number sounds Speaker recognition using the Pitch detection and the Fuzzified pattern (피치 검출과 퍼지화 패턴을 이용한 숫자음 화자 인식에 관한 연구)

  • 김연숙;김희주;김경재
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.3
    • /
    • pp.73-79
    • /
    • 2003
  • This paper proposes a speaker recognition algorithm that combines pitch detection with fuzzified pattern matching. The study uses a pitch pattern derived from the detected pitch, and a binary spectrum as the speech parameter. The reference pattern is built with a fuzzy membership function so as to absorb the time-variation width of non-utterance intervals, and recognition based on common vocal-tract characteristics is performed by fuzzified pattern matching.
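A triangular membership function is one common way to build a fuzzified reference pattern that tolerates timing variation. The sketch below is generic: it assumes scalar pattern values and a hypothetical width parameter, not the paper's actual binary-spectrum representation.

```python
def triangular_membership(x, center, width):
    """Triangular fuzzy membership in [0, 1]: full membership at the
    reference value, falling linearly to 0 at +/- width."""
    return max(0.0, 1.0 - abs(x - center) / width)

def fuzzy_match_score(pattern, reference, width=2.0):
    """Average membership of an observed pattern against a fuzzified
    reference pattern; 1.0 means an exact match everywhere."""
    return sum(triangular_membership(x, c, width)
               for x, c in zip(pattern, reference)) / len(reference)
```

Recognition would then pick the reference speaker pattern with the highest score, so small timing or value deviations degrade the score gradually instead of failing a hard comparison.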
