• Title/Summary/Keyword: Speech signal

Search results: 26

Korean ESL Learners' Perception of English Segments: a Cochlear Implant Simulation Study (인공와우 시뮬레이션에서 나타난 건청인 영어학습자의 영어 말소리 지각)

  • Yim, Ae-Ri;Kim, Dahee;Rhee, Seok-Chae
    • Phonetics and Speech Sciences / v.6 no.3 / pp.91-99 / 2014
  • Although it is well documented that patients with cochlear implants experience hearing difficulties when processing their first language, very little is known about whether, and to what extent, cochlear implant patients recognize segments in a second language. This preliminary study examines how Korean learners of English identify English segments in normal-hearing and cochlear implant simulation conditions. Participants heard English vowels and consonants in three conditions: normal hearing, 12-channel noise vocoding with 0 mm spectral shift, and 12-channel noise vocoding with 3 mm spectral shift. The results confirmed that nonnative listeners can also retrieve spectral information from vocoded speech signals, as they recognized vowel features fairly accurately despite the vocoding. In contrast, the intelligibility of the manner and place features of consonants was significantly reduced by vocoding. In addition, we found that spectral shift affected listeners' vowel recognition, probably because information about F1 is diminished by spectral shifting. The results suggest that patients with cochlear implants and normal-hearing second language learners would experience different patterns of listening errors when processing their second language(s).
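The 12-channel noise vocoding described in the abstract can be sketched as follows. This is a minimal illustration, not the study's stimulus-generation code: the band spacing, envelope cutoff, and function names are assumptions, and the spectral-shift manipulation is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def noise_vocode(signal, sr, n_channels=12, env_cutoff=50.0, rng=None):
    """Noise-vocode `signal`: split it into log-spaced bands, extract each
    band's slow amplitude envelope, and use that envelope to modulate
    band-limited noise; sum the channels to form the vocoded output."""
    rng = np.random.default_rng(rng)
    edges = np.logspace(np.log10(100.0),
                        np.log10(min(8000.0, sr / 2 - 1)),
                        n_channels + 1)
    env_b, env_a = butter(2, env_cutoff / (sr / 2), btype="low")
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (sr / 2), hi / (sr / 2)], btype="band")
        band = filtfilt(b, a, signal)                     # analysis band
        env = filtfilt(env_b, env_a, np.abs(band))        # amplitude envelope
        noise = filtfilt(b, a, rng.standard_normal(len(signal)))
        out += np.clip(env, 0, None) * noise              # envelope x noise carrier
    return out
```

A spectral shift would additionally remap each envelope onto a band shifted along the (simulated) basilar membrane before resynthesis.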

Variational autoencoder for prosody-based speaker recognition

  • Starlet Ben Alex;Leena Mary
    • ETRI Journal / v.45 no.4 / pp.678-689 / 2023
  • This paper describes a novel end-to-end deep generative model-based speaker recognition system using prosodic features. The usefulness of variational autoencoders (VAE) in learning the speaker-specific prosody representations for the speaker recognition task is examined herein for the first time. The speech signal is first automatically segmented into syllable-like units using vowel onset points (VOP) and energy valleys. Prosodic features, such as the dynamics of duration, energy, and fundamental frequency (F0), are then extracted at the syllable level and used to train/adapt a speaker-dependent VAE from a universal VAE. The initial comparative studies on VAEs and traditional autoencoders (AE) suggest that the former can efficiently learn speaker representations. Investigations on the impact of gender information in speaker recognition also point out that gender-dependent impostor banks lead to higher accuracies. Finally, the evaluation on the NIST SRE 2010 dataset demonstrates the usefulness of the proposed approach for speaker recognition.
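The syllable-level prosodic features described above (duration plus the dynamics of energy and F0) can be sketched roughly as follows. The VOP-based segmentation itself is not reproduced here, and the function name, slope-based dynamics, and frame duration are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def syllable_prosody_features(f0, energy, syllable_bounds, frame_dur=0.01):
    """Per-syllable prosodic feature vectors from frame-level F0 and energy:
    [duration, F0 mean, F0 slope, energy mean, energy slope]."""
    feats = []
    for start, end in syllable_bounds:        # frame indices, end exclusive
        seg_f0 = np.asarray(f0[start:end], float)
        seg_en = np.asarray(energy[start:end], float)
        n = end - start
        t = np.arange(n)
        f0_slope = np.polyfit(t, seg_f0, 1)[0] if n > 1 else 0.0
        en_slope = np.polyfit(t, seg_en, 1)[0] if n > 1 else 0.0
        feats.append([n * frame_dur,
                      seg_f0.mean(), f0_slope,
                      seg_en.mean(), en_slope])
    return np.asarray(feats)
```

Vectors like these would then be the training input for the speaker-dependent VAE.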

The Duration Feature of Acoustic Signals and Korean Speakers' Perception of English Stops (구간 신호 길이 자질과 한국인의 영어 파열음 지각)

  • Kim, Mun-Hyong;Jun, Jong-Sup
    • Phonetics and Speech Sciences / v.1 no.3 / pp.19-28 / 2009
  • This paper reports experimental findings on the role of the duration feature of the acoustic components of English stops in Korean speakers' voicing perception. In our experiment, 35 participants discriminated between recorded stimuli and digitally transformed stimuli whose duration features differed from the originals. 72 sets of paired stimuli were generated to test the effects of the duration feature in various phonetic contexts. The result of our experiment is a complicated cross-tabulation with 540 cells defined by five categorical independent variables plus one response variable. To find a meaningful generalization in this complex frequency table, we ran logit log-linear regression analyses. Surprisingly, we found no single effect of the duration feature across all phonetic contexts on Korean speakers' perception of the voicing contrasts of English stops. Instead, the logit log-linear analyses reveal interaction effects among phonetic context (=C), the place of articulation of stops (=P), and the voicing contrast (=V), and among duration (=T), phonetic context, and place of articulation. In mathematical terms, the distribution of the data can be explained by a simple log-linear equation, $\log F = \mu + \lambda_{CPV} + \lambda_{TCP}$.
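As a toy illustration of how a log-linear model of the form log F = mu + lambda_CPV + lambda_TCP yields an expected cell frequency, one can sum the grand mean and the two interaction terms and exponentiate. The coefficient values below are invented for illustration, not the paper's fitted estimates.

```python
import math

# Illustrative (made-up) effect estimates for a few cells of the table:
# context C, place P, voicing V, duration T.
mu = 2.0                                               # grand mean of log frequency
lambda_CPV = {("medial", "velar", "voiced"): 0.35}     # C x P x V interaction
lambda_TCP = {("long", "medial", "velar"): -0.20}      # T x C x P interaction

def expected_frequency(c, p, v, t):
    """Expected cell count under log F = mu + lambda_CPV + lambda_TCP;
    unlisted cells get a zero effect."""
    log_f = (mu
             + lambda_CPV.get((c, p, v), 0.0)
             + lambda_TCP.get((t, c, p), 0.0))
    return math.exp(log_f)
```

The key property of the fitted model is that duration (T) only enters through its interaction with context and place, never as a main effect on voicing.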


Implementation and Evaluation of Electroglottograph System (전기성문전도(EGG) 시스템의 개발 및 평가)

  • 김기련;김광년;왕수건;허승덕;이승훈;전계록;최병철;정동근
    • Journal of Biomedical Engineering Research / v.25 no.5 / pp.343-349 / 2004
  • The electroglottograph (EGG) is a signal recording vocal cord vibration, obtained by measuring the electrical impedance across the vocal folds through the skin of the neck. The purpose of this study was to develop an EGG system and to evaluate its applicability to speech analysis and the diagnosis of laryngeal disease. The EGG system was composed of two pairs of ring electrodes, a tuned amplifier, a phase-sensitive detector, a low-pass filter, and an automatic gain controller. It was designed to extract the electrical impedance by amplitude-modulation detection of a 2.7 MHz carrier signal. The extracted signals were transmitted through the line-in of a PC sound card, then sampled and quantized. Closed quotient (CQ), speed quotient (SQ), speed index (SI), the fundamental frequency of vocal cord vibration (F0), the pitch variability of vocal fold vibration (jitter), and the peak-to-peak amplitude variability of vocal fold vibration (shimmer) were analyzed as EGG parameters. The experimental results were as follows: the faster the vocal folds vibrated, the higher the CQ value and the lower the SQ and SI values. The EGG and speech signals had the same fundamental frequency. CQ, SQ, and SI were significantly different between normal subjects and patients with laryngeal cancer. These results suggest that it is possible to implement a portable EGG system to monitor the function of the vocal cords and to test functional changes of the glottis.
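Several of the EGG parameters above can be computed directly from per-cycle measurements. A minimal sketch follows; the function name and input format are assumptions, and SQ and SI would analogously require the opening- and closing-phase durations of each glottal cycle.

```python
import numpy as np

def egg_parameters(periods, amplitudes, closed_fractions):
    """Cycle-level EGG measures.
    periods: glottal cycle durations in seconds;
    amplitudes: peak-to-peak EGG amplitude per cycle;
    closed_fractions: closed-phase duration divided by period, per cycle."""
    periods = np.asarray(periods, float)
    amplitudes = np.asarray(amplitudes, float)
    f0 = 1.0 / periods.mean()                                    # fundamental frequency
    cq = float(np.mean(closed_fractions))                        # closed quotient
    jitter = np.mean(np.abs(np.diff(periods))) / periods.mean()  # period perturbation
    shimmer = np.mean(np.abs(np.diff(amplitudes))) / amplitudes.mean()
    return {"F0": f0, "CQ": cq, "Jitter": jitter, "Shimmer": shimmer}
```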

Automatic Recognition of Pitch Accents Using Time-Delay Recurrent Neural Network (시간지연 회귀 신경회로망을 이용한 피치 악센트 인식)

  • Kim, Sung-Suk;Kim, Chul;Lee, Wan-Joo
    • The Journal of the Acoustical Society of Korea / v.23 no.4E / pp.112-119 / 2004
  • This paper presents a method for the automatic recognition of pitch accents with no prior knowledge of the phonetic content of the signal (no knowledge of word or phoneme boundaries or of phoneme labels). The recognition algorithm used in this paper is a time-delay recurrent neural network (TDRNN). A TDRNN is a neural network classifier with two different representations of dynamic context: delayed input nodes allow the representation of an explicit trajectory F0(t), while recurrent nodes provide long-term context information that can be used to normalize the input F0 trajectory. The performance of the TDRNN is compared to that of an MLP (multi-layer perceptron) and an HMM (hidden Markov model) on the same task. The TDRNN correctly recognizes 91.9% of pitch events and 91.0% of pitch non-events, for an average accuracy of 91.5% over both. The MLP with contextual input exhibits 85.8%, 85.5%, and 85.6% recognition accuracy respectively, while the HMM correctly recognizes 36.8% of pitch events and 87.3% of pitch non-events, for an average accuracy of 62.2% over both. These results suggest that the TDRNN architecture is useful for the automatic recognition of pitch accents.
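The TDRNN's two context mechanisms, delayed input nodes over the F0 trajectory and a recurrent hidden state, can be sketched as a simple forward pass. This is an illustrative toy with random weights, not the paper's trained network or its layer sizes.

```python
import numpy as np

def tdrnn_step(f0_window, h_prev, W_in, W_rec, b):
    """One step of a time-delay recurrent layer: the input is a window of
    delayed F0 samples (explicit trajectory F0(t)), while the recurrent
    state h carries longer-term context that can normalise the trajectory."""
    return np.tanh(W_in @ f0_window + W_rec @ h_prev + b)

def run_tdrnn(f0, delay=5, hidden=8, seed=0):
    """Slide a `delay`-sample window over an F0 contour, updating the
    recurrent state at each step; returns the hidden-state sequence."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((hidden, delay)) * 0.1
    W_rec = rng.standard_normal((hidden, hidden)) * 0.1
    b = np.zeros(hidden)
    h = np.zeros(hidden)
    outputs = []
    for t in range(delay, len(f0) + 1):
        h = tdrnn_step(f0[t - delay:t], h, W_in, W_rec, b)
        outputs.append(h.copy())
    return np.array(outputs)
```

In the paper the final layer would classify each frame as a pitch event or non-event; here only the context machinery is shown.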

MATERIALS AND METHODS FOR TEACHING INTONATION

  • Ashby, Michael
    • Proceedings of the KSPS conference / 1997.07a / pp.228-229 / 1997
  • 1. Intonation is important. It cannot be ignored. To convince students of the importance of intonation, we can use sentences with two very different interpretations according to intonation. Example: "I thought it would rain" with a fall on "rain" means it did not rain, but with a fall on "thought" and a rise on "rain" it means that it did rain.
    2. Although complex, intonation is structured. For both teacher and student, the big job of tackling intonation is made simpler by remembering that intonation can be analysed into systems and units. There are three main systems in English intonation: tonality (division into phrases), tonicity (selection of accented syllables), and tone (the choice of pitch movements). Examples: tonality: "My brother who lives in London is a doctor." Tonicity: "Hello. How ARE you." vs. "Hello. How are YOU." Tone: ways to say "Thank you".
    3. In deciding what to teach, we must distinguish what is universal from what is specifically English. This is where contrastive studies of intonation are very valuable. Usually, for instance, division into phrases (tonality) works in broadly similar ways across languages. Some uses of pitch are also similar across languages - for example, very high pitch may signal excitement or urgency.
    4. Although most people think that intonation is mainly about pitch (the tone system), accent placement (tonicity) is probably the single most important aspect of English intonation. This is because it is connected with information focus, and the effects on interpretation are very clear-cut. Example: "They asked for coffee, so I made them coffee." (The second occurrence of "coffee" must not be accented.)
    5. Ear-training is the beginning of intonation training in the VeL approach. First, students learn to identify fall vs. rise vs. fall-rise. To begin with, single words are used, then phrases and sentences. When learning tones, the first words used should have unstressed syllables after the stressed syllable ("Saturday") to make the pitch movement clearer.
    6. In production drills, the first thing is to establish simple neutral patterns. There should be no drama or really special meanings. Simple drills can be used to teach important patterns. Example: A: "Peter likes football." B: "Yes, JOHN likes football TOO." A: "Mary rides a bike." B: "Yes, JENny rides a bike TOO."
    7. The teacher must be systematic and let learners KNOW what they are learning. It is no good using new patterns and hoping that students will "pick them up" without noticing.
    8. Visual feedback of fundamental frequency with a computer display can help students learn correct patterns. The teacher can use the display to demonstrate patterns, or students can practise by themselves, imitating recorded models.
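The visual F0 feedback mentioned in the final point presupposes a pitch tracker. A minimal sketch of how an F0 trace could be estimated frame by frame using autocorrelation follows; the function name, search range, and parameters are illustrative assumptions, not a description of any particular teaching system.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency of one frame by autocorrelation:
    pick the lag with the strongest self-similarity within the F0 range."""
    frame = frame - frame.mean()                 # remove DC offset
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)      # candidate lag range
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag
```

Running this over successive 40 ms frames of voiced speech yields the kind of F0 contour a classroom display would plot.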
