• Title/Summary/Keyword: Speech function

Interference Suppression Using Principal Subspace Modification in Multichannel Wiener Filter and Its Application to Speech Recognition

  • Kim, Gi-Bak
    • ETRI Journal
    • /
    • v.32 no.6
    • /
    • pp.921-931
    • /
    • 2010
  • It has been shown that the principal subspace-based multichannel Wiener filter (MWF) provides better performance than the conventional MWF for suppressing interference in the case of a single target source. It efficiently estimates the target speech component in the principal subspace, which yields an estimate of the acoustic transfer function up to a scaling factor. However, as the input signal-to-interference ratio (SIR) becomes lower, larger errors are incurred in the estimation of the acoustic transfer function by the principal subspace method, degrading the interference-suppression performance. To alleviate this problem, a principal subspace modification method was proposed in previous work; it reduces the estimation error of the acoustic transfer function vector at low SIRs. In this work, a frequency-band dependent interpolation technique is further employed for the principal subspace modification. A speech recognition test conducted with the Sphinx-4 system demonstrates the practical usefulness of the proposed method as front-end processing for a speech recognizer in a distant-talking, interferer-present environment.
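
As a rough illustration of the principal-subspace idea described in this abstract, the sketch below estimates the acoustic transfer function (up to a scaling factor) as the dominant eigenvector of the estimated speech covariance and builds a rank-1 multichannel Wiener filter for one frequency bin. This is a minimal sketch under generic assumptions, not the paper's implementation; the function name `principal_subspace_mwf` and its inputs are hypothetical, and the proposed subspace modification and frequency-band dependent interpolation are not shown.

```python
import numpy as np

def principal_subspace_mwf(R_x, R_n, ref=0):
    """Rank-1 multichannel Wiener filter for one frequency bin (hypothetical helper).

    R_x : (M, M) spatial covariance of the noisy (target + interference) signal
    R_n : (M, M) spatial covariance of the interference/noise
    ref : index of the reference microphone
    """
    # Speech covariance estimate; in practice this needs regularization.
    R_s = R_x - R_n

    # The principal eigenvector of R_s approximates the acoustic transfer
    # function up to an unknown scaling factor (the principal subspace).
    eigvals, eigvecs = np.linalg.eigh(R_s)
    h = eigvecs[:, -1]                       # dominant eigenvector
    sigma_s = max(float(eigvals[-1]), 0.0)   # dominant eigenvalue ~ speech power

    # Rank-1 speech model and MWF for the reference channel:
    #   w = R_s1 (R_s1 + R_n)^{-1} e_ref,  with  R_s1 = sigma_s * h h^H
    R_s1 = sigma_s * np.outer(h, h.conj())
    e_ref = np.zeros(R_x.shape[0])
    e_ref[ref] = 1.0
    w = R_s1 @ np.linalg.solve(R_s1 + R_n, e_ref)
    return w
```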

A Study of the Pitch Estimation Algorithms of Speech Signal by Using Average Magnitude Difference Function (AMDF) (AMDF 함수를 이용한 음성 신호의 피치 추정 Algorithm들에 관한 연구)

  • So, Shinae;Lee, Kang Hee;You, Kwang-Bock;Lim, Ha-Young;Park, Jisu
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.4
    • /
    • pp.235-242
    • /
    • 2017
  • Peak (or null) finding algorithms for the average magnitude difference function (AMDF) of a speech signal are proposed in this paper. Both the AMDF and the autocorrelation function (ACF) are widely used to estimate the pitch of a speech signal, and it is well known that estimating the fundamental frequency (F0) of speech is both important and very difficult. In this paper, two algorithms that exploit the characteristics of the AMDF are proposed. The first applies a threshold to the local minima of the AMDF to detect the pitch period. The second estimates the pitch period by using the relationship between the AMDF and the ACF. The data used in this paper, recorded with a general commercial device, consist of Korean emotion-expression words. The recorded speech is applied to the two proposed algorithms and their performance is tested.
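
For illustration, the sketch below computes the AMDF of a speech frame and picks the earliest lag that falls below a relative threshold, which is one common threshold-based variant rather than the exact rules proposed in the paper; the helper names `amdf` and `pitch_from_amdf` and the parameter defaults are assumptions.

```python
import numpy as np

def amdf(frame):
    """Average magnitude difference function of one speech frame."""
    n = len(frame)
    return np.array([np.mean(np.abs(frame[:n - k] - frame[k:])) for k in range(n)])

def pitch_from_amdf(frame, fs, f_lo=60.0, f_hi=400.0, threshold_ratio=0.3):
    """Estimate F0 from the earliest AMDF value below a relative threshold."""
    d = amdf(np.asarray(frame, dtype=float))
    lag_min = int(fs / f_hi)                    # shortest plausible pitch period
    lag_max = min(int(fs / f_lo), len(d) - 1)   # longest plausible pitch period
    search = d[lag_min:lag_max]
    # Relative threshold over the search range (threshold-based variant).
    thr = search.min() + threshold_ratio * (search.max() - search.min())
    candidates = np.where(search <= thr)[0]
    lag = lag_min + candidates[0]               # first lag below the threshold
    return fs / lag                             # F0 in Hz
```

A voiced frame of roughly 30-40 ms at a 16 kHz sampling rate is assumed, so that at least two pitch periods fall inside the analysis window.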

Implementation of Text-to-Audio Visual Speech Synthesis Using Key Frames of Face Images (키프레임 얼굴영상을 이용한 시청각음성합성 시스템 구현)

  • Kim MyoungGon;Kim JinYoung;Baek SeongJoon
    • MALSORI
    • /
    • no.43
    • /
    • pp.73-88
    • /
    • 2002
  • In this paper, for natural facial synthesis, a lip-synch algorithm based on a key-frame method using an RBF (radial basis function) is presented. For lip synthesis, viseme range parameters are generated from the phoneme and duration information produced by the text-to-speech (TTS) system, and the viseme information corresponding to each phoneme is extracted from an audio-visual (AV) database. A dominance function is applied to reflect the coarticulation phenomenon, and bilinear interpolation is applied to reduce calculation time. Lip-synch is then performed by playing the images synthesized by interpolation between phonemes together with the TTS speech output.
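
The sketch below illustrates how key-frame viseme parameters can be blended with a dominance function to model coarticulation, in the spirit of the approach described above; the exponential dominance shape, the function names, and the parameter layout are assumptions, and the RBF-based image synthesis and bilinear interpolation steps are not shown.

```python
import numpy as np

def dominance(t, center, width):
    """Exponential dominance of one viseme at time t (Cohen-Massaro style shape)."""
    return np.exp(-np.abs(t - center) / width)

def blend_viseme_params(t, centers, widths, key_params):
    """Blend key-frame viseme parameters at time t to model coarticulation.

    centers, widths : per-viseme timing and spread (seconds)
    key_params      : (num_visemes, num_params) lip parameters of the key frames
    Returns the (num_params,) blended lip-shape parameter vector at time t.
    """
    w = np.array([dominance(t, c, s) for c, s in zip(centers, widths)])
    w /= w.sum()                       # normalized dominance weights
    return w @ np.asarray(key_params)  # weighted mix of neighbouring visemes
```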

Performance Enhancement of Speaker Identification in Noisy Environments by Optimization Membership Function Based on Particle Swarm (Particle Swarm 기반 최적화 멤버쉽 함수에 의한 잡음 환경에서의 화자인식 성능향상)

  • Min, So-Hee;Song, Min-Gyu;Na, Seung-You;Kim, Jin-Young
    • Speech Sciences
    • /
    • v.14 no.2
    • /
    • pp.105-114
    • /
    • 2007
  • The performance of a speaker identifier is severely degraded in noisy environments. A previous study [1] suggested the concept of observation membership for enhancing the performance of speaker identification with noisy speech; the method scales the observation probabilities of the input speech by observation identification values decided by the SNR. In [1], the authors suggested heuristic parameter values for the membership function. In this paper we apply particle swarm optimization (PSO) to obtain optimal membership-function parameters for speaker identification in noisy environments. Speaker identification experiments using the ETRI database show that the optimization approach yields better performance than using only the original membership function.
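
As a generic illustration of the optimization step, the sketch below is a minimal particle swarm optimizer that could be used to search for membership-function parameters by minimizing an identification-error objective; the function `pso`, its hyperparameters, and the objective interface are assumptions, not the paper's configuration.

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: returns the best parameter vector found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()      # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```

Here `objective` would score a candidate membership-function parameter vector by the identification error it produces on a development set.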

An evaluation of Korean students' pronunciation of an English passage by a speech recognition application and two human raters

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.12 no.4
    • /
    • pp.19-25
    • /
    • 2020
  • This study examined thirty-one Korean students' pronunciation of an English passage using a speech recognition application, Speechnotes, and two Canadian raters' evaluations of their speech according to the International English Language Testing System (IELTS) band criteria, in order to assess the possibility of using the application as a teaching aid for pronunciation education. The results showed that the grand average percentage of correctly recognized words was 77.7%. From this moderate recognition rate, the pronunciation level of the participants was construed as intermediate or higher. The recognition rate varied depending on the composition of the content words and the function words in each given sentence. Frequency counts of unrecognized words by group level and word type revealed the typical pronunciation problems of the participants, including fricatives and nasals. The IELTS bands chosen by the two native raters for the rainbow passage had a moderately high correlation with each other. A moderate correlation was found between the number of correctly recognized content words and the raters' bands, while an almost negligible correlation was found between the function words and the raters' bands. From these results, the author concludes that the speech recognition application can serve as a partial aid for diagnosing an individual's or a group's pronunciation problems, but further studies are still needed for it to match human raters.

A Study on the Eavesdropping of the Glass Window Vibration in a Conference Room (회의실내 유리창 진동의 도청에 대한 연구)

  • Kim, Seock-Hyun;Kim, Yoon-Ho;Heo, Wook
    • Journal of Industrial Technology
    • /
    • v.27 no.A
    • /
    • pp.55-60
    • /
    • 2007
  • The possibility of eavesdropping is investigated for a coupled conference room and glass window system. A speech intelligibility analysis is performed on the sound eavesdropped from the glass window. Using an MLS (maximum length sequence) signal as the sound source, the acceleration and velocity responses of the glass window are measured with an accelerometer and a laser Doppler vibrometer. The MTF (modulation transfer function) is used to identify the speech transmission characteristics of the room and window system. The STI (speech transmission index) is calculated from the MTF, and the speech intelligibility of the vibration sound is estimated. The speech intelligibilities obtained from the acceleration signal and the velocity signal are compared.
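
For reference, the sketch below shows the standard way an STI-like index is derived from MTF values: each modulation transfer value is converted to an apparent SNR, limited to ±15 dB, mapped to a transmission index, and averaged. It is a simplified single-band version, not the paper's full octave-band computation, and the function name is hypothetical.

```python
import numpy as np

def sti_from_mtf(m):
    """Simplified STI from a set of modulation transfer values m (0 < m < 1)."""
    m = np.clip(np.asarray(m, dtype=float), 1e-6, 1.0 - 1e-6)
    snr_apparent = 10.0 * np.log10(m / (1.0 - m))      # apparent SNR per value, dB
    snr_apparent = np.clip(snr_apparent, -15.0, 15.0)  # limit to +/- 15 dB
    ti = (snr_apparent + 15.0) / 30.0                  # transmission index per value
    return float(ti.mean())                            # simplified (single-band) STI
```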

Noise Reduction Using MMSE Estimator-based Adaptive Comb Filtering (MMSE Estimator 기반의 적응 콤 필터링을 이용한 잡음 제거)

  • Park, Jeong-Sik;Oh, Yung-Hwan
    • MALSORI
    • /
    • no.60
    • /
    • pp.181-190
    • /
    • 2006
  • This paper describes a speech enhancement scheme that leads to significant improvements in recognition performance when used in the ASR front-end. The proposed approach is based on adaptive comb filtering and an MMSE-related parameter estimator. While adaptive comb filtering reduces noise components remarkably, it is rarely effective in reducing non-stationary noises. Furthermore, due to the uniformly distributed frequency response of the comb-filter, it can cause serious distortion to clean speech signals. This paper proposes an improved comb-filter that adjusts its spectral magnitude to the original speech, based on the speech absence probability and the gain modification function. In addition, we introduce the modified comb filtering-based speech enhancement scheme for ASR in mobile environments. Evaluation experiments carried out using the Aurora 2 database demonstrate that the proposed method outperforms conventional adaptive comb filtering techniques in both clean and noisy environments.
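
The sketch below gives one plausible way a comb-filter gain could be combined with a speech absence probability (SAP), passing harmonics of the estimated pitch when speech is likely present and falling back to broadband attenuation when it is likely absent. It is only an illustration of the general idea under assumed shapes and parameters; it is not the MMSE estimator or gain modification function used in the paper.

```python
import numpy as np

def adaptive_comb_gain(n_fft, fs, f0, sap, g_floor=0.1, bw_hz=40.0):
    """Illustrative comb-filter gain weighted by the speech absence probability.

    n_fft, fs : FFT size and sampling rate (Hz)
    f0        : current fundamental frequency estimate (Hz, > 0)
    sap       : speech absence probability of the frame, in [0, 1]
    """
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    # Distance of each frequency bin to the nearest harmonic of f0.
    dist = np.abs(freqs - f0 * np.round(freqs / f0))
    comb = np.exp(-0.5 * (dist / bw_hz) ** 2)   # Gaussian "teeth" at the harmonics
    # Pass harmonics when speech is likely present (low SAP); apply a strong
    # broadband attenuation when speech is likely absent (high SAP).
    gain = (1.0 - sap) * comb + sap * g_floor
    return np.maximum(gain, 1e-2)
```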

Multi-Channel Speech Enhancement Algorithm Using DOA-based Learning Rate Control (DOA 기반 학습률 조절을 이용한 다채널 음성개선 알고리즘)

  • Kim, Su-Hwan;Lee, Young-Jae;Kim, Young-Il;Jeong, Sang-Bae
    • Phonetics and Speech Sciences
    • /
    • v.3 no.3
    • /
    • pp.91-98
    • /
    • 2011
  • In this paper, a multi-channel speech enhancement method using the linearly constrained minimum variance (LCMV) algorithm with variable learning rate control is proposed. To control the learning rate of the LCMV adaptive filters, the direction of arrival (DOA) is measured for each short-time input signal and the likelihood of target speech presence is estimated. Using this likelihood measure, the learning rate is increased during pure-noise intervals and decreased during target speech intervals. To optimize the parameters of the mapping function between the likelihood value and the corresponding learning rate, an exhaustive search is performed using the Bark spectral distortion (BSD) as the performance index. Experimental results show that the proposed algorithm outperforms the conventional LCMV with a fixed learning rate by around 1.5 dB in terms of BSD.
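
As a small illustration of the control idea, the sketch below maps a target-presence likelihood (derived from the DOA estimate) to an adaptive-filter step size through a sigmoid, so adaptation is fast during noise-only intervals and nearly frozen while the target speaks; the sigmoid form and all parameter values are assumptions rather than the paper's optimized mapping.

```python
import numpy as np

def step_size_from_likelihood(likelihood, mu_max=5e-2, mu_min=1e-4,
                              slope=8.0, threshold=0.5):
    """Map target-speech-presence likelihood (from DOA) to an adaptive step size.

    Low likelihood (noise-only interval)   -> step size near mu_max (adapt fast).
    High likelihood (target speech active) -> step size near mu_min (nearly frozen).
    """
    s = 1.0 / (1.0 + np.exp(slope * (likelihood - threshold)))  # decreasing sigmoid
    return mu_min + (mu_max - mu_min) * s
```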

Development of an Autonomous Mobile Robot with the Function of Teaching a Moving Path by Speech and Avoiding a Collision (음성에 의한 경로교시 기능과 충돌회피 기능을 갖춘 자율이동로봇의 개발)

  • Park, Min-Gyu;Lee, Min-Cheol;Lee, Suk
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.8
    • /
    • pp.189-197
    • /
    • 2000
  • This paper describes the development of an autonomous mobile robot that can be taught a moving path by speech and can avoid collisions. Using human speech as the teaching method provides a more convenient user interface for a mobile robot. For the speech recognition system, a recognition algorithm using a neural network is proposed to recognize Korean syllables. For safe navigation, the autonomous mobile robot needs the ability to recognize its surrounding environment and to avoid collisions with obstacles. Ultrasonic sensors are used to obtain the distances from the mobile robot to the various obstacles in the surrounding environment. With the navigation algorithm, the robot forecasts the possibility of collision with obstacles and modifies its moving path if it detects a dangerous obstacle.
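
The sketch below is a toy version of the collision-avoidance decision described above, using three ultrasonic range readings to decide whether to keep the taught course or turn toward the freer side; the function name, sensor layout, and thresholds are assumptions for illustration only.

```python
def avoid_collision(sonar_cm, safe_cm=50.0, turn_deg=30.0):
    """Toy collision-avoidance rule using three ultrasonic range readings.

    sonar_cm : dict with distances in cm for the 'left', 'front' and 'right' sensors.
    Returns a heading correction in degrees (positive = turn left, 0 = keep course).
    """
    if sonar_cm['front'] > safe_cm:
        return 0.0  # the taught path ahead is clear
    # Obstacle forecast ahead: steer toward the side with more free space.
    return turn_deg if sonar_cm['left'] > sonar_cm['right'] else -turn_deg
```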

Progress, challenges, and future perspectives in genetic researches of stuttering

  • Kang, Changsoo
    • Journal of Genetic Medicine
    • /
    • v.18 no.2
    • /
    • pp.75-82
    • /
    • 2021
  • Speech and language functions are highly cognitive and human-specific features. The underlying causes of normal speech and language function are believed to reside in the human brain. Developmental persistent stuttering, a speech and language disorder, has been regarded as the most challenging disorder in determining genetic causes because of the high percentage of spontaneous recovery among people who stutter. This mysterious characteristic hinders speech pathologists from discriminating recovered stutterers from completely normal individuals. Over the last several decades, several genetic approaches have been used to identify the genetic causes of stuttering, and remarkable progress has been made in genome-wide linkage analysis followed by gene sequencing. So far, four genes, namely GNPTAB, GNPTG, NAGPA, and AP4E1, are known to cause stuttering. Furthermore, the generation of mouse models of stuttering and morphometry analysis have created new ways for researchers to identify brain regions that participate in human speech function and to understand the neuropathology of stuttering. In this review, we aimed to investigate previous progress, challenges, and future perspectives in understanding the genetics and neuropathology underlying persistent developmental stuttering.