Title/Summary/Keyword: Non-speech

The influence of utterance length on speech rate in spontaneous speech (자연발화 음성 코퍼스에서 발화 속도에 대한 발화 길이의 영향)

  • Kim, Jungsun
    • Phonetics and Speech Sciences / v.9 no.1 / pp.9-17 / 2017
  • The current study examined speech rate and its variance in spontaneous Seoul Korean speech, focusing on factors affecting that variance such as utterance length, individual speaker, and gender. The results revealed that, first, utterance length has a significant influence on speech rate: longer utterances were spoken at a faster rate. Second, individual speakers differed significantly in how utterance length affected their speaking rate, and both between-speaker and within-speaker variation tended to increase as utterance length increased. Third, there were gender differences: males produced considerably faster speaking rates than females. These findings suggest that non-linguistic factors in spontaneous speech can affect the variance of speakers' speaking rates.
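
The core measurement in such a study can be sketched in a few lines: compute each utterance's speech rate as syllables per second, then test how it covaries with utterance length and gender. A minimal illustration (the utterance records and field layout below are hypothetical, not the paper's corpus):

```python
# Hypothetical utterance records: (syllables, duration_s, speaker, gender)
from scipy import stats

utterances = [
    (12, 2.9, "F01", "F"),
    (25, 5.4, "F01", "F"),
    (31, 6.1, "M02", "M"),
    (48, 8.7, "M02", "M"),
]

lengths = [n for n, d, spk, g in utterances]
rates = [n / d for n, d, spk, g in utterances]        # syllables per second

# Does speech rate increase with utterance length?
r, p = stats.pearsonr(lengths, rates)
print(f"length-rate correlation: r={r:.2f} (p={p:.3f})")

# Gender comparison of speaking rate
male = [rt for (n, d, spk, g), rt in zip(utterances, rates) if g == "M"]
female = [rt for (n, d, spk, g), rt in zip(utterances, rates) if g == "F"]
print(stats.ttest_ind(male, female))
```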

Comparison Research of Non-Target Sentence Rejection on Phoneme-Based Recognition Networks (음소기반 인식 네트워크에서의 비인식 대상 문장 거부 기능의 비교 연구)

  • Kim, Hyung-Tai;Ha, Jin-Young
    • MALSORI / no.59 / pp.27-51 / 2006
  • For speech recognition systems, a rejection function as well as a decoding function is necessary to improve reliability. There have been many research efforts on out-of-vocabulary word rejection, but little attention has been paid to non-target sentence rejection. Recently, pronunciation-training applications of speech recognition have increased the need for non-target sentence rejection to obtain more accurate and robust results. In this paper, we propose a filler model method and a word/phoneme detection ratio method to implement a non-target sentence rejection system. We evaluated the filler model method with word-level, phoneme-level, and sentence-level filler models, and ran similar experiments with the word-level and phoneme-level detection ratio methods. For the evaluation, the minimized average of the false acceptance rate (FAR) and false rejection rate (FRR) is used to compare the effectiveness of each method against the number of words in the given sentences. The experimental results show that the word-level methods outperform the others, with the word-level filler model performing slightly better than the word detection ratio method.
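
The evaluation criterion, the minimized average of FAR and FRR, can be illustrated with a small threshold sweep. The scores below are hypothetical stand-ins for whatever confidence measure a filler-model or detection-ratio system emits; higher means "more likely a target sentence":

```python
import numpy as np

target_scores = np.array([0.81, 0.74, 0.92, 0.66])      # target sentences
nontarget_scores = np.array([0.31, 0.55, 0.48, 0.27])   # non-target sentences

best = None
for thr in np.linspace(0, 1, 101):
    far = np.mean(nontarget_scores >= thr)   # non-targets falsely accepted
    frr = np.mean(target_scores < thr)       # targets falsely rejected
    avg = (far + frr) / 2
    if best is None or avg < best[0]:
        best = (avg, thr, far, frr)

avg, thr, far, frr = best
print(f"threshold={thr:.2f}  FAR={far:.2f}  FRR={frr:.2f}  avg={avg:.2f}")
```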

A Cepstral Analysis of Breathy Voice with Vocal Fold Paralysis (성대마비로 인한 기식 음성에 대한 Cepstral 분석)

  • Kang, Young-Ae;Seong, Cheol-Jae
    • Phonetics and Speech Sciences / v.4 no.2 / pp.89-94 / 2012
  • The aim of this study is to investigate the usefulness of the parameters CPP (cepstral peak prominence) and LTAS (long-term average spectrum) band energy for the analysis of breathy voice with vocal fold paralysis. Thirty-four female subjects with vocal fold paralysis after thyroidectomy participated in this study. According to the perceptual judgments of three speech pathologists and one phonetician, subjects were divided into a breathy voice group (n = 21) and a non-breathy voice group (n = 13). A maximum sustained phonation task was recorded for acoustic analysis. CPP-related parameters (mean F0, mean CPP, and mean CPPs) and LTAS-related parameters (minimum, maximum, and mean) were used, and an independent-samples t-test was conducted. Regarding CPP, there were significant differences in mean CPP and mean CPPs between the groups: the values in the non-breathy voice group were higher than those in the breathy voice group, so CPP can be regarded as a useful parameter for breathy voice analysis in the clinic. As for LTAS, band energies from 0 to 2 kHz differed significantly between the groups: the minimum value of the non-breathy group was lower than that of the breathy group, whereas its maximum value was higher. The frequency band below 2 kHz therefore seems to be related to breathy voice.
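
For readers unfamiliar with the parameter, a common formulation of CPP (not necessarily the exact tool chain the authors used) measures how far the cepstral peak in the plausible F0 range rises above a regression line fitted to the cepstrum; breathier voices tend to show lower values:

```python
import numpy as np

def cpp(frame, fs, f0_min=60.0, f0_max=400.0):
    """Cepstral peak prominence (dB) of one analysis frame."""
    n = len(frame)
    spec_db = 20 * np.log10(np.abs(np.fft.fft(frame * np.hamming(n))) + 1e-12)
    ceps_db = 20 * np.log10(np.abs(np.fft.ifft(spec_db)) + 1e-12)
    quef = np.arange(n) / fs                     # quefrency axis in seconds

    lo, hi = int(fs / f0_max), int(fs / f0_min)  # plausible pitch quefrencies
    peak = lo + int(np.argmax(ceps_db[lo:hi]))

    # regression baseline through the cepstrum in the same range
    slope, intercept = np.polyfit(quef[lo:hi], ceps_db[lo:hi], 1)
    return ceps_db[peak] - (slope * quef[peak] + intercept)

fs = 16000
frame = np.random.randn(1024)                    # stand-in for a real frame
print(cpp(frame, fs))
```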

Non-Dialog Section Detection for the Descriptive Video Service Contents Authoring (화면해설방송 저작을 위한 비 대사 구간 검출)

  • Jang, Inseon;Ahn, ChungHyun;Jang, Younseon
    • Journal of Broadcast Engineering / v.19 no.3 / pp.296-306 / 2014
  • This paper addresses the problem of non-dialog section detection for DVS authoring, the goal of which is to find meaningful sections of the broadcast audio where audio description can be inserted. Because broadcast audio contains a variety of sounds, the method first discriminates between speech and non-speech for each audio frame. The proposed method jointly exploits the inter-channel structure and speech source characteristics of stereo broadcast audio. Finally, rule-based post-processing is applied to detect non-dialog sections whose lengths are appropriate for audio description. The proposed method provides more accurate detection than the conventional method, and experimental results on real broadcast content show its qualitative superiority.
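
The final rule-based step can be pictured as merging frame-wise speech/non-speech decisions into gaps long enough for a description to be read. A sketch under assumed frame-length and minimum-duration rules (the paper's actual rules are more elaborate):

```python
def non_dialog_sections(is_speech, frame_s=0.02, min_len_s=2.0):
    """Return (start_s, end_s) spans of consecutive non-speech frames."""
    sections, start = [], None
    for i, speech in enumerate(is_speech + [True]):   # sentinel flushes tail
        if not speech and start is None:
            start = i
        elif speech and start is not None:
            if (i - start) * frame_s >= min_len_s:
                sections.append((start * frame_s, i * frame_s))
            start = None
    return sections

labels = [True] * 50 + [False] * 150 + [True] * 30 + [False] * 40
print(non_dialog_sections(labels))   # only the 3-second span qualifies
```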

Acoustic Features of Phonatory Offset-Onset in the Connected Speech between a Female Stutterer and Non-Stutterers (연속구어 내 발성 종결-개시의 음향학적 특징 - 말더듬 화자와 비말더듬 화자 비교 -)

  • Han, Ji-Yeon;Lee, Ok-Bun
    • Speech Sciences / v.13 no.2 / pp.19-33 / 2006
  • The purpose of this paper was to examine the acoustic characteristics of the phonatory offset-onset mechanism in the connected speech of female adults who stutter and who are normally nonfluent. The phonatory offset-onset mechanism refers to the laryngeal articulatory gestures required to mark word boundaries in the phonetic contexts of connected speech; it comprises seven patterns identifiable in the speech spectrogram. This study examined connected speech produced by female adults with stuttering (n = 1) and with normal nonfluency (n = 3). Speech tokens in the V_V, V_H, and V_S contexts were selected for analysis; samples were recorded with Sound Forge, and the spectrographic analysis was conducted using Praat. Results revealed that the female stutterer (with block-type stuttering) exhibited more laryngealization gestures in the V_V context, and that in the V_H and V_S contexts her laryngealization gestures were characterized by a complete glottal stop or glottal fry. The results are discussed from theoretical and clinical perspectives.
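
The spectrographic workflow the authors describe in Praat can also be scripted, for instance through the parselmouth Python bindings; the file name and analysis settings below are hypothetical:

```python
import parselmouth

snd = parselmouth.Sound("v_v_token.wav")          # one V_V speech token
spec = snd.to_spectrogram(window_length=0.005,    # broadband, 5 ms window
                          maximum_frequency=5000.0)

# Glottal fry / glottal stop traces show up as irregular or absent
# voicing striations; a pitch track helps locate phonatory offset-onset.
pitch = snd.to_pitch(time_step=0.01)
print(pitch.selected_array["frequency"][:10])     # 0 Hz where unvoiced
```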

Background Noise Classification in Noisy Speech of Short Time Duration Using Improved Speech Parameter (개량된 음성매개변수를 사용한 지속시간이 짧은 잡음음성 중의 배경잡음 분류)

  • Choi, Jae-Seung
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.9 / pp.1673-1678 / 2016
  • In speech recognition, background noise causes incorrect responses to the speech input and therefore decreases recognition rates. Countermeasures against such noise are not simple, so more advanced noise processing techniques are required. This paper therefore proposes an algorithm that distinguishes speech signals of short duration from stationary and non-stationary background noises in noisy environments. The proposed algorithm uses improved characteristic parameters of the speech signal as measures for distinguishing the different types of background noise from speech, and then classifies the various background noises using a multi-layer perceptron neural network. The experiments confirmed that the algorithm can discriminate between the background noises and the speech signals.
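
The classification stage can be sketched with an off-the-shelf multi-layer perceptron; the random feature vectors below merely stand in for the paper's improved speech parameters, which are not reproduced here:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                  # 12-dim feature vectors
y = rng.integers(0, 3, size=200)                # 0/1/2: noise-type labels

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
clf.fit(X[:160], y[:160])                       # train on 160 segments
print("held-out accuracy:", clf.score(X[160:], y[160:]))
```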

Speech extraction based on AuxIVA with weighted source variance and noise dependence for robust speech recognition (강인 음성 인식을 위한 가중화된 음원 분산 및 잡음 의존성을 활용한 보조함수 독립 벡터 분석 기반 음성 추출)

  • Shin, Ui-Hyeop;Park, Hyung-Min
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.326-334 / 2022
  • In this paper, we propose a speech enhancement algorithm as pre-processing for robust speech recognition in noisy environments. Auxiliary-function-based Independent Vector Analysis (AuxIVA) is performed with a weighted covariance matrix whose time-varying variances are scaled by target masks representing the time-frequency contributions of the target speech. The mask estimates can be obtained from a Neural Network (NN) pre-trained for speech extraction, or from diffuseness based on the Coherence-to-Diffuse power Ratio (CDR), to find the direct-sound component of the target speech. In addition, the outputs for omni-directional noise are closely chained by sharing their time-varying variances, similarly to independent subspace analysis or IVA. The AuxIVA-based speech extraction method is also formulated in the Independent Low-Rank Matrix Analysis (ILRMA) framework by extending the Non-negative Matrix Factorization (NMF) of the noise outputs to Non-negative Tensor Factorization (NTF), maintaining the inter-channel dependency of the noise output channels. Experimental results on the CHiME-4 datasets demonstrate the effectiveness of the presented algorithms.
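
As context, the AuxIVA baseline that the proposed weighting extends is available in pyroomacoustics; the sketch below runs plain AuxIVA on a stand-in stereo mixture and does not implement the paper's mask-weighted variances, chained noise variances, or NTF noise model:

```python
import numpy as np
import pyroomacoustics as pra

fs = 16000
mix = np.random.randn(fs * 4, 2)            # stand-in for a 2-channel mixture

L, hop = 2048, 512
win_a = pra.hann(L)
win_s = pra.transform.stft.compute_synthesis_window(win_a, hop)

X = pra.transform.stft.analysis(mix, L, hop, win=win_a)  # (frames, freqs, chans)
Y = pra.bss.auxiva(X, n_iter=20, proj_back=True)         # demixed STFT
y = pra.transform.stft.synthesis(Y, L, hop, win=win_s)   # back to time domain
```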

An acoustic Doppler-based silent speech interface technology using generative adversarial networks (생성적 적대 신경망을 이용한 음향 도플러 기반 무 음성 대화기술)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea / v.40 no.2 / pp.161-168 / 2021
  • In this paper, a Silent Speech Interface (SSI) technology is proposed in which the Doppler frequency shifts of a reflected 40 kHz ultrasonic signal, incident on the speaker's mouth region, are used to synthesize speech signals. In SSI, mapping rules from features of non-speech signals to those of audible speech are constructed, and speech is then synthesized from non-speech signals using these rules. In conventional SSI methods, the mapping rules are built by minimizing the overall error between the estimated and true speech parameters. In the present study, the mapping rules were instead constructed using Generative Adversarial Networks (GAN), so that the distribution of the estimated parameters is similar to that of the true parameters. Experimental results on 60 Korean words showed that, both objectively and subjectively, the performance of the proposed method was superior to that of conventional neural-network-based methods.
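
The distribution-matching idea can be illustrated with a toy adversarial pair: a generator maps Doppler-derived features to speech parameters while a discriminator tries to separate generated parameter vectors from real ones. All dimensions and data below are hypothetical; the paper's actual network design is not reproduced:

```python
import torch
import torch.nn as nn

DOP_DIM, SPE_DIM = 64, 25        # Doppler feature / speech parameter sizes

G = nn.Sequential(nn.Linear(DOP_DIM, 128), nn.ReLU(), nn.Linear(128, SPE_DIM))
D = nn.Sequential(nn.Linear(SPE_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

doppler = torch.randn(32, DOP_DIM)       # stand-in training batch
speech = torch.randn(32, SPE_DIM)

for _ in range(100):
    # Discriminator: real speech parameters vs. generated ones.
    fake = G(doppler).detach()
    loss_d = bce(D(speech), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator (distribution matching).
    loss_g = bce(D(G(doppler)), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```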

A Preliminary Study on Differences of Phonatory Offset-Onset between the Fluency and a Dysfluency (유창성과 비유창성 화자의 발성 종결-개시 차이에 관한 예비연구)

  • Han, Ji-Yeon;Lee, Ok-Bun
    • Proceedings of the KSPS conference / 2006.05a / pp.109-112 / 2006
  • This study investigated the acoustic characteristics of phonatory offset-onset mechanisms, comparing non-stutterers (n = 3) with a stutterer (n = 1). Phonatory offset-onset refers to laryngeal articulatory gestures in connected speech. In the V_V phonetic context, pattern 0 (no change) appeared in all subjects, while pattern 4 (traces of glottal fry and closure in the spectrogram) appeared only in the stutterer. With high vowels (/i/, /u/), patterns 3 and 4 appeared only in the stutterer. Although there was no pattern common to the non-stutterers, each individual showed a preferred pattern. This study offers a key to understanding the physiological movements underlying a block in stuttering.

Development of articulatory estimation model using deep neural network (심층신경망을 이용한 조음 예측 모형 개발)

  • You, Heejo;Yang, Hyungwon;Kang, Jaekoo;Cho, Youngsun;Hwang, Sung Hah;Hong, Yeonjung;Cho, Yejin;Kim, Seohyun;Nam, Hosung
    • Phonetics and Speech Sciences / v.8 no.3 / pp.31-38 / 2016
  • Speech inversion (acoustic-to-articulatory mapping) is not a trivial problem, despite its importance, due to its highly non-linear and non-unique nature. This study investigated the performance of a Deep Neural Network (DNN) compared with a traditional Artificial Neural Network (ANN) on this problem. The Wisconsin X-ray Microbeam Database was employed, with the acoustic signal as model input and the articulatory pellet positions as output. Results showed that the performance of the ANN deteriorated as the number of hidden layers increased. In contrast, the DNN showed lower and more stable RMS error even with up to 10 hidden layers, suggesting that the DNN learns the acoustic-to-articulatory inverse mapping more efficiently than the ANN.
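
A minimal version of such an inversion network, a deep feed-forward model trained to map acoustic frames to pellet coordinates and scored by RMS error, might look as follows (the layer sizes and the 39/16 feature dimensions are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

N_ACOUSTIC, N_PELLET, DEPTH, WIDTH = 39, 16, 10, 256

layers, d = [], N_ACOUSTIC
for _ in range(DEPTH):                       # 10 deep hidden layers
    layers += [nn.Linear(d, WIDTH), nn.ReLU()]
    d = WIDTH
layers.append(nn.Linear(d, N_PELLET))        # x/y coordinates of 8 pellets
model = nn.Sequential(*layers)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
acoustic = torch.randn(512, N_ACOUSTIC)      # stand-in training frames
pellets = torch.randn(512, N_PELLET)

for _ in range(200):
    pred = model(acoustic)
    loss = torch.sqrt(nn.functional.mse_loss(pred, pellets))  # RMS error
    opt.zero_grad(); loss.backward(); opt.step()
```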