• Title/Summary/Keyword: speech source


A Study on the Eavesdropping of the Glass Window Vibration in a Conference Room (회의실내 유리창 진동의 도청에 대한 연구)

  • Kim, Seock-Hyun; Kim, Yoon-Ho; Heo, Wook
    • Journal of Industrial Technology / v.27 no.A / pp.55-60 / 2007
  • The possibility of eavesdropping is investigated for a coupled conference room-glass window system. Speech intelligibility analysis is performed on the sound eavesdropped from the glass window. Using an MLS (Maximum Length Sequence) signal as the sound source, acceleration and velocity responses of the glass window are measured with an accelerometer and a laser Doppler vibrometer. The MTF (Modulation Transfer Function) is used to identify the speech transmission characteristics of the room and window system. The STI (Speech Transmission Index) is calculated from the MTF, and the speech intelligibility of the vibration sound is estimated. Speech intelligibilities obtained from the acceleration signal and the velocity signal are compared.
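The MTF-to-STI chain described above can be sketched as follows: a minimal, single-band Python illustration that derives modulation indices from a measured impulse response via Schroeder's relation and maps them to an STI-style value. The impulse response here is synthetic and the full octave-band weighting of the standard STI procedure is omitted, so the numbers are illustrative only.

```python
import numpy as np

def mtf_from_impulse_response(h, fs, mod_freqs):
    """Schroeder's relation: m(F) = |Fourier transform of h^2 at F| / sum(h^2)."""
    energy = h ** 2
    t = np.arange(len(h)) / fs
    total = energy.sum()
    return np.array([np.abs(np.sum(energy * np.exp(-2j * np.pi * F * t))) / total
                     for F in mod_freqs])

def sti_like_index(m):
    """Map modulation indices to an STI-style value (single band, unweighted)."""
    snr_app = 10 * np.log10(m / (1 - m))            # apparent SNR per modulation frequency
    snr_app = np.clip(snr_app, -15.0, 15.0)         # limit to +/-15 dB as in the STI method
    return float(np.mean((snr_app + 15.0) / 30.0))  # normalize each to 0..1 and average

if __name__ == "__main__":
    fs = 8000
    t = np.arange(0, 1.0, 1.0 / fs)
    h = np.exp(-6.9 * t / 0.6) * np.random.randn(t.size)   # synthetic decay, RT60 ~ 0.6 s
    mod_freqs = [0.63, 0.8, 1.0, 1.25, 1.6, 2.0, 2.5, 3.15,
                 4.0, 5.0, 6.3, 8.0, 10.0, 12.5]           # the 14 standard modulation frequencies
    m = mtf_from_impulse_response(h, fs, mod_freqs)
    print("STI-like index: %.2f" % sti_like_index(m))
```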


Investigation of the Speech Intelligibility of Classrooms Depending on the Sound Source Location

  • Kim Jeong Tai; Haan Chan-Hoon
    • The Journal of the Acoustical Society of Korea / v.24 no.4E / pp.139-143 / 2005
  • The present study aims to investigate the effects of speaker location on speech intelligibility in a classroom. To do this, acoustic measurements were undertaken in a classroom with three different sound source locations: the center of the front wall (FC), both sides of the front wall (FS), and the center of the ceiling (CC). SPL, RT, $D_{50}$, and RASTI were measured at 9 measurement points with the same sound power level at each source location, and an MLS signal was used as the source signal. Also, subjective listening tests were carried out using Korean-language listening materials recorded in an anechoic chamber. The recorded syllables were replayed and re-recorded in the classroom with the same sound source at the three locations, and listening tests were undertaken with 20 respondents, who were asked to write down the syllables they heard. The results show that higher speech intelligibility ($D_{50}$ of $47\%$, RASTI of 0.56) was obtained when the sound source was located at the FS position. The results also show that high speech intelligibility was obtained in the area near the walls.
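A minimal sketch of the $D_{50}$ metric reported above, assuming a measured room impulse response (synthetic here): it simply ratios the energy arriving within the first 50 ms against the total energy from the direct sound onward.

```python
import numpy as np

def definition_d50(h, fs):
    """D50: fraction of impulse-response energy arriving within the first 50 ms."""
    onset = int(np.argmax(np.abs(h)))           # take the strongest peak as the direct sound
    early = h[onset:onset + int(0.05 * fs)]     # 50 ms early-energy window
    total = h[onset:]
    return float(np.sum(early ** 2) / np.sum(total ** 2))

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 1.0, 1.0 / fs)
    h = np.exp(-6.9 * t / 0.8) * np.random.randn(t.size)   # synthetic room decay, RT60 ~ 0.8 s
    print("D50 = %.2f" % definition_d50(h, fs))
```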

A New MPEG Reference Model for Unified Speech and Audio Coding (통합 음성/오디오 부호화를 위한 새로운 MPEG 참조 모델)

  • Song, Jeong-Ook; Oh, Hyen-O; Kang, Hong-Goo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.74-80 / 2010
  • Speech and audio codecs have been developed on different types of coding technologies, since the two domains have different signal characteristics and applications. In line with the convergence of broadcasting and telecommunication systems, international standardization organizations such as 3GPP and ISO/IEC MPEG have tried to compress and transmit multimedia signals using unified codecs. MPEG recently initiated an activity to standardize USAC (Unified Speech and Audio Coding). However, the USAC RM (Reference Model) software has been problematic because of its complex hierarchy, large amount of unused source code, and poor encoder quality. To solve these problems, this paper introduces a new RM software designed with an open-source paradigm. It was presented at the MPEG meeting in April 2010, and the source code was released in June.

Online blind source separation and dereverberation of speech based on a joint diagonalizability constraint (공동 행렬대각화 조건 기반 온라인 음원 신호 분리 및 잔향제거)

  • Yu, Ho-Gun; Kim, Do-Hui; Song, Min-Hwan; Park, Hyung-Min
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.503-514 / 2021
  • Reverberation in speech signals tends to significantly degrade the performance of Blind Source Separation (BSS) systems, and in online systems the degradation becomes severe. Methods based on joint diagonalizability constraints have recently been developed to tackle the problem. To improve the quality of the separated speech in reverberant environments, this paper adds a dereverberation method to the online BSS algorithm based on these constraints. In experiments on the WSJCAM0 corpus, the proposed method was compared with the existing online BSS algorithm. Performance evaluation with the Signal-to-Distortion Ratio (SDR) and the Perceptual Evaluation of Speech Quality (PESQ) showed that, on average, SDR improved from 1.23 dB to 3.76 dB and PESQ improved from 1.15 to 2.12.
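The paper evaluates with SDR and PESQ; PESQ requires the ITU-T reference implementation, but a minimal SDR sketch, assuming time-aligned reference and estimate signals, looks like this. The projection-based decomposition below is a simplification of the full BSS_EVAL definition.

```python
import numpy as np

def sdr_db(reference, estimate):
    """Project the estimate onto the reference; treat the remainder as distortion."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference                  # component explained by the reference
    distortion = estimate - target              # everything else (noise, reverberation, artifacts)
    return 10 * np.log10(np.sum(target ** 2) / np.sum(distortion ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.standard_normal(16000)
    degraded = clean + 0.5 * rng.standard_normal(16000)   # stand-in for a separated/reverberant output
    print("SDR = %.2f dB" % sdr_db(clean, degraded))
```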

Statistical Approaches to Convert Pitch Contour Based on Korean Prosodic Phrases (한국어 운율구 기반의 피치궤적 변환의 통계적 접근)

  • Lee, Ki-Young
    • The Journal of the Acoustical Society of Korea / v.23 no.1E / pp.10-15 / 2004
  • In converting speech from a source speaker to a target speaker, it is important that the pitch contour of the source speaker's utterance be converted into that of the target speaker, because the pitch contour plays an important role in expressing a speaker's individuality and the meaning of the utterance. This paper describes statistical algorithms for pitch contour conversion in Korean. Pitch contour conversion is investigated at two levels of prosodic phrases: the intonational phrase and the accentual phrase. The basic algorithm is a Gaussian normalization [7] within the intonational phrase. The first presented algorithm combines this with the declination line of the pitch contour in an intonational phrase. The second applies Gaussian normalization within accentual phrases to compensate for local pitch variations. Experimental results show that Gaussian normalization within accentual phrases is significantly more accurate than the other two intonational-phrase algorithms.
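The Gaussian normalization that the abstract takes as its baseline can be sketched as below: source F0 values are mapped to the target speaker by matching mean and standard deviation. The log-F0 domain and the per-phrase statistics used here are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def gaussian_normalize_f0(f0_source, src_stats, tgt_stats):
    """Map source-speaker F0 values to the target speaker by matching
    mean and standard deviation in the log-F0 domain."""
    mu_s, sigma_s = src_stats
    mu_t, sigma_t = tgt_stats
    log_f0 = np.log(f0_source)
    return np.exp((log_f0 - mu_s) / sigma_s * sigma_t + mu_t)

if __name__ == "__main__":
    # Hypothetical per-phrase statistics estimated from training utterances.
    src_stats = (np.log(120.0), 0.15)    # male-like source, mean F0 around 120 Hz
    tgt_stats = (np.log(220.0), 0.20)    # female-like target, mean F0 around 220 Hz
    f0_contour = np.array([110.0, 118.0, 125.0, 130.0, 122.0])   # toy pitch contour (Hz)
    print(gaussian_normalize_f0(f0_contour, src_stats, tgt_stats))
```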

Speech Enhancement Using Blind Signal Separation Combined With Null Beamforming

  • Nam Seung-Hyon; Jr. Rodrigo C. Munoz
    • The Journal of the Acoustical Society of Korea / v.25 no.4E / pp.142-147 / 2006
  • Blind signal separation is known as a powerful tool for enhancing noisy speech in many real-world environments. In this paper, it is demonstrated that the performance of blind signal separation can be further improved by combining it with a null beamformer (NBF). Cascading blind source separation with null beamforming is equivalent to decomposing the received signals into their direct and reverberant parts. Investigation of the beam patterns of the null beamformer and of blind signal separation reveals that the directional null of the NBF mainly reduces the direct parts of the unwanted signals, whereas blind signal separation reduces the reverberant parts. Further, it is shown that this decomposition of the received signals can be exploited to solve the local stability problem. Therefore, faster and better separation can be obtained by first removing the direct parts with null beamforming. Simulation results using real office recordings confirm this expectation.
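A minimal two-microphone delay-and-subtract null beamformer, the kind of front end the abstract cascades with blind signal separation. The geometry (5 cm spacing, far-field plane wave) and the frequency-domain delay used here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def null_beamformer(x1, x2, fs, theta_null_deg, mic_spacing=0.05, c=343.0):
    """Two-microphone delay-and-subtract beamformer with a spatial null
    toward theta_null_deg (measured from broadside)."""
    tau = mic_spacing * np.sin(np.radians(theta_null_deg)) / c
    n = len(x1)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    # Delay mic 1 by tau: a plane wave from the null direction then lines up
    # with what mic 2 recorded and is cancelled by the subtraction.
    Y = X1 * np.exp(-2j * np.pi * freqs * tau) - X2
    return np.fft.irfft(Y, n)

if __name__ == "__main__":
    fs = 16000
    rng = np.random.default_rng(1)
    s = rng.standard_normal(fs)                              # interfering source
    tau = 0.05 * np.sin(np.radians(30.0)) / 343.0            # it arrives from 30 degrees
    freqs = np.fft.rfftfreq(len(s), 1.0 / fs)
    x1 = s
    x2 = np.fft.irfft(np.fft.rfft(s) * np.exp(-2j * np.pi * freqs * tau), len(s))
    y = null_beamformer(x1, x2, fs, theta_null_deg=30.0)
    print("interferer power before/after null: %.3f / %.2e"
          % (np.mean(x1 ** 2), np.mean(y ** 2)))
```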

Analysis and Comparisons of Acoustical Characteristics of Pathologic Voice before and after Surgery (후두질환에 대한 술전 술후 음성의 음향적 특성비교 분석)

  • Kim, Dae-Hyun; Jo, Cheol-Woo; Baek, Moo-Jin; Wang, Soo-Geun
    • Speech Sciences / v.7 no.3 / pp.285-294 / 2000
  • In this paper, the acoustic characteristics of pathological voice measured before and after surgical operation are compared. The experiment is conducted for the purpose of predicting a patient's speech after the operation. The voices are recorded from the same patients. Jitter, shimmer, and other parameters are computed, and their statistical characteristics are compared. Spectral changes, such as formant frequency shifts and spectral slope changes, are also compared. The experimental results verify that not only the source characteristics but also the vocal tract components vary, which indicates that modifying the source parameters alone is not enough for the prediction. The results also indicate that the operation changes both the physical shape of the vocal folds and the manner of articulation.
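Local jitter and shimmer, two of the parameters the study computes, can be sketched as follows. The cycle-by-cycle periods and peak amplitudes below are hypothetical values standing in for measurements extracted from a sustained vowel.

```python
import numpy as np

def jitter_local(periods_ms):
    """Local jitter (%): mean absolute difference of consecutive pitch periods,
    relative to the mean period."""
    periods = np.asarray(periods_ms, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_local(amplitudes):
    """Local shimmer (%): mean absolute difference of consecutive cycle
    peak amplitudes, relative to the mean amplitude."""
    amps = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amps))) / np.mean(amps)

if __name__ == "__main__":
    # Hypothetical cycle-by-cycle measurements from a sustained vowel.
    periods_ms = [7.9, 8.1, 8.0, 8.3, 7.8, 8.2]
    amplitudes = [0.71, 0.69, 0.73, 0.68, 0.72, 0.70]
    print("jitter  = %.2f %%" % jitter_local(periods_ms))
    print("shimmer = %.2f %%" % shimmer_local(amplitudes))
```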


Sound Localization Technique for Intelligent Service Robot 'WEVER' (지능형 로봇 '웨버'를 위한 음원 추적 기술)

  • Lee, Ji-Yeoun; Hahn, Min-Soo; Ji, Su-young; Cho, Young-Jo
    • Proceedings of the KSPS conference / 2005.11a / pp.117-120 / 2005
  • This paper suggests an algorithm that can estimate the direction of a sound source in real time. Our intelligent service robot, WEVER, is used to implement the proposed method in a home environment. The algorithm uses the time difference and sound intensity information among the sound signals recorded by four microphones. Also, to deal with the noise of the robot itself, a Kalman filter is implemented. The proposed method takes a shorter execution time than an existing algorithm, which suits a real-time service robot. The result shows a relatively small error, within a range of ${\pm}$7 degrees.
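A sketch of time-difference-based direction estimation for a single microphone pair, using a GCC-PHAT delay estimate. The paper's four-microphone array, intensity cue, and Kalman filtering are not reproduced here, and the spacing and angle below are illustrative assumptions.

```python
import numpy as np

def tdoa_direction(x1, x2, fs, mic_spacing=0.1, c=343.0):
    """Estimate the arrival angle (degrees from broadside) of a source from the
    GCC-PHAT time difference between two microphones."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = np.conj(X1) * X2
    cross /= np.abs(cross) + 1e-12                  # PHAT weighting
    cc = np.fft.irfft(cross, n)
    max_shift = int(fs * mic_spacing / c)           # delays beyond this are physically impossible
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    delay = (np.argmax(np.abs(cc)) - max_shift) / fs
    sin_theta = np.clip(delay * c / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

if __name__ == "__main__":
    fs = 16000
    rng = np.random.default_rng(2)
    s = rng.standard_normal(fs)
    lag = int(round(fs * 0.1 * np.sin(np.radians(20.0)) / 343.0))   # source near 20 degrees,
    x1, x2 = s, np.roll(s, lag)        # mic 2 hears it later; whole-sample delay coarsens the estimate
    print("estimated angle: %.1f degrees" % tdoa_direction(x1, x2, fs))
```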


GMM based Nonlinear Transformation Methods for Voice Conversion

  • Vu, Hoang-Gia; Bae, Jae-Hyun; Oh, Yung-Hwan
    • Proceedings of the KSPS conference / 2005.11a / pp.67-70 / 2005
  • Voice conversion (VC) is a technique for modifying the speech signal of a source speaker so that it sounds as if it were spoken by a target speaker. Most previous VC approaches use a linear transformation function based on a GMM to convert the source spectral envelope to the target spectral envelope. In this paper, we propose several nonlinear GMM-based transformation functions in an attempt to deal with the over-smoothing effect of linear transformation. In order to obtain high-quality modification of the speech signals, our VC system is implemented using the Harmonic plus Noise Model (HNM) analysis/synthesis framework. Experimental results are reported on the English corpus MOCHA-TIMIT.
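The baseline linear GMM mapping that the proposed nonlinear functions modify can be sketched as follows, assuming diagonal covariances and toy parameter values; the HNM analysis/synthesis front end is not shown.

```python
import numpy as np

def gmm_convert(x, weights, mu_x, mu_y, var_x, cov_yx):
    """Baseline GMM linear conversion: posterior-weighted sum over mixtures of
    mu_y + cov_yx / var_x * (x - mu_x), with diagonal covariances."""
    # Posterior probability of each mixture given the source frame x.
    log_lik = -0.5 * np.sum((x - mu_x) ** 2 / var_x + np.log(2 * np.pi * var_x), axis=1)
    post = weights * np.exp(log_lik - log_lik.max())
    post /= post.sum()
    # Per-mixture linear regression from source to target spectral features.
    y_m = mu_y + cov_yx / var_x * (x - mu_x)
    return post @ y_m

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    M, D = 4, 3                                   # 4 mixtures, 3-dimensional toy features
    weights = np.full(M, 1.0 / M)
    mu_x, mu_y = rng.standard_normal((M, D)), rng.standard_normal((M, D))
    var_x = np.full((M, D), 0.5)
    cov_yx = np.full((M, D), 0.2)
    x = rng.standard_normal(D)                    # one source spectral-envelope frame
    print(gmm_convert(x, weights, mu_x, mu_y, var_x, cov_yx))
```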


Speech Basis Matrix Using Noise Data and NMF-Based Speech Enhancement Scheme (잡음 데이터를 활용한 음성 기저 행렬과 NMF 기반 음성 향상 기법)

  • Kwon, Kisoo; Kim, Hyung Young; Kim, Nam Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.4 / pp.619-627 / 2015
  • This paper presents a speech enhancement method using non-negative matrix factorization (NMF). In the training phase, a basis matrix for each source signal is obtained from a suitable database, and these basis matrices are then used for source separation. In this case, the performance of speech enhancement relies heavily on the basis matrices. The proposed method, in which the speech basis matrix is trained so that it yields a high reconstruction error for noise signals, shows better performance than standard NMF, in which each basis matrix is trained independently. For comparison, we propose another method and also evaluate a previous one. In the experiments, performance is evaluated with the Perceptual Evaluation of Speech Quality (PESQ) and the Signal-to-Distortion Ratio (SDR), and the proposed method outperformed the other methods.
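A sketch of the standard NMF enhancement pipeline that the paper uses as its baseline: speech and noise bases trained independently, fixed at separation time, and combined through a Wiener-style mask. The discriminative basis training of the proposed method is not reproduced, and the spectrograms below are random stand-ins for real magnitude spectrograms.

```python
import numpy as np

def train_basis(V, rank, iters=200, eps=1e-9):
    """Learn a nonnegative basis W for magnitude spectra V (freq x time)
    with standard multiplicative updates (Euclidean cost)."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W

def enhance(V_mix, W_speech, W_noise, iters=200, eps=1e-9):
    """Fix the concatenated bases, learn activations for the mixture, and
    apply a Wiener-style mask built from the speech part of the reconstruction."""
    W = np.concatenate([W_speech, W_noise], axis=1)
    rng = np.random.default_rng(1)
    H = rng.random((W.shape[1], V_mix.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V_mix) / (W.T @ W @ H + eps)
    ks = W_speech.shape[1]
    speech_hat = W_speech @ H[:ks]
    noise_hat = W_noise @ H[ks:]
    mask = speech_hat / (speech_hat + noise_hat + eps)
    return mask * V_mix

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    V_speech, V_noise = rng.random((64, 100)), rng.random((64, 100))  # stand-in magnitude spectrograms
    W_s, W_n = train_basis(V_speech, 10), train_basis(V_noise, 10)
    V_mix = V_speech + V_noise
    print(enhance(V_mix, W_s, W_n).shape)
```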