• Title/Summary/Keyword: speech separation

Influence of Nasometer Structure on Nasalance for Speech Therapy (언어치료환자를 위한 비음측정기 모듈의 구조가 비음치 산출에 미치는 영향)

  • Woo, Seong Tak;Park, Y.B.;Kim, J.Y.;Oh, D.H.;Ha, J.W.;Na, S.D.;Kim, M.N.
    • Journal of Korea Multimedia Society / v.22 no.2 / pp.157-166 / 2019
  • With the development of medical technology, interest in rehabilitation devices is increasing and various devices are being studied. In particular, devices for speech disorders such as hearing impairment and cleft palate are attracting attention. The nasometer is generally used for patients with flaccid dysarthria and velopharyngeal incompetence (VPI). However, the conventional separator-type nasometer suffers from acoustic feedback between the oral and nasal signals. A mask-type nasometer that is insensitive to acoustic feedback has recently been developed, but it is not yet in widespread use. In this paper, the characteristics of the conventional separator-type and the mask-type nasometer are analyzed. We also obtained clinical acoustic data from six subjects and examined the significant differences between the separator-type and mask-type structures. The experiments confirmed that nasalance measured with the mask-type nasometer was about 3~15% higher than with the conventional separator-type device. We also discuss the need for nasometer signal processing for acoustic feedback reduction and optimized nasalance calculation.
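
The nasalance score itself is a simple energy ratio: nasal acoustic energy divided by combined nasal and oral energy, expressed as a percentage. Below is a minimal frame-wise sketch, assuming separate nasal- and oral-microphone signals are already available; the paper's hardware comparison and feedback-reduction processing are not modeled.

```python
import numpy as np

def nasalance_score(nasal, oral, sr, frame_ms=20):
    """Frame-wise nasalance: 100 * nasal energy / (nasal + oral energy).
    `nasal` and `oral` are 1-D signals from the nasal and oral
    microphones of a nasometer; near-silent frames are skipped."""
    n = int(sr * frame_ms / 1000)
    scores = []
    for i in range(0, min(len(nasal), len(oral)) - n + 1, n):
        en = np.sum(nasal[i:i + n] ** 2)   # nasal-channel frame energy
        eo = np.sum(oral[i:i + n] ** 2)    # oral-channel frame energy
        if en + eo > 1e-12:
            scores.append(100.0 * en / (en + eo))
    return float(np.mean(scores))          # mean nasalance over the utterance, %
```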

Blind Noise Separation Method of Convolutive Mixed Signals (컨볼루션 혼합신호의 암묵 잡음분리방법)

  • Lee, Haeng-Woo
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.3 / pp.409-416 / 2022
  • This paper addresses the blind separation of noise from time-delayed, convolutively mixed signals. Since the mixing model of acoustic signals in a closed space is multi-channel, a convolutive blind signal separation method is applied, using time-delayed data samples of the two microphone input signals. For signal separation, the mixing coefficients are calculated through an inverse model rather than by computing the separation coefficients directly, and the coefficients are updated by iterative computations based on second-order statistics to estimate the speech signal. Extensive simulations were performed to verify the performance of the proposed blind signal separation. The results show that noise separation with this method operates stably regardless of the convolutive mixing, and PESQ is improved by 0.3 points compared to a general adaptive FIR filter structure.
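
The abstract does not spell out the update equations, but a classical second-order scheme in the same family is symmetric adaptive decorrelation, in which two cross-filters model the leakage paths (an inverse mixing model) and are iterated until the two outputs are decorrelated. The following two-microphone sketch is offered only as an illustration of that idea, not as the paper's exact algorithm.

```python
import numpy as np

def sad_separate(x1, x2, taps=64, mu=1e-4):
    """Symmetric adaptive decorrelation for a 2x2 convolutive mixture.
    Cross-filters a12 and a21 model the leakage of each source into
    the opposite microphone; they are adapted until the outputs y1
    and y2 are decorrelated (second-order statistics only)."""
    a12, a21 = np.zeros(taps), np.zeros(taps)
    y1, y2 = np.zeros(len(x1)), np.zeros(len(x2))
    for t in range(taps, len(x1)):
        past2 = y2[t - taps:t][::-1]    # y2[t-1] ... y2[t-taps]
        past1 = y1[t - taps:t][::-1]
        y1[t] = x1[t] - a12 @ past2     # strip source-2 leakage from mic 1
        y2[t] = x2[t] - a21 @ past1     # strip source-1 leakage from mic 2
        a12 += mu * y1[t] * past2       # drive E[y1(t) y2(t-k)] toward zero
        a21 += mu * y2[t] * past1
    return y1, y2
```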

Separation of Voiced Sounds and Unvoiced Sounds for Corpus-based Korean Text-To-Speech (한국어 음성합성기의 성능 향상을 위한 합성 단위의 유무성음 분리)

  • Hong, Mun-Ki;Shin, Ji-Young;Kang, Sun-Mee
    • Speech Sciences / v.10 no.2 / pp.7-25 / 2003
  • Predicting the right prosodic elements is a key factor in improving the quality of synthesized speech. Prosodic elements include break, pitch, duration, and loudness. Pitch, which is realized as the fundamental frequency (F0), is the element most closely tied to the quality of the synthesized speech. However, previous methods for predicting F0 show some problems. If voiced and unvoiced sounds are not correctly classified, the result is wrong pitch prediction, the wrong triphone unit when synthesizing voiced and unvoiced sounds, and audible clicks or buzzing. This typically occurs at transitions from a voiced to an unvoiced sound or from an unvoiced to a voiced sound. The problem is not resolved by rule-based grammatical methods, and it strongly affects the synthesized sound. Therefore, to obtain reliably correct pitch values, this paper proposes a new model for predicting and classifying voiced and unvoiced sounds using the CART tool.
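
CART is an ordinary binary decision tree grown on labelled frames. As a rough illustration of the classification step only (the paper's corpus and feature set are not specified here), the sketch below trains scikit-learn's CART-style tree on toy voiced/unvoiced frames described by log energy, zero-crossing rate, and lag-1 autocorrelation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # CART-style decision tree

def vuv_features(frame):
    """Per-frame cues often used for voiced/unvoiced decisions:
    log energy, zero-crossing rate, lag-1 autocorrelation."""
    e = np.log(np.sum(frame ** 2) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
    r1 = np.sum(frame[1:] * frame[:-1]) / (np.sum(frame ** 2) + 1e-10)
    return [e, zcr, r1]

# Toy training data: voiced ~ low-frequency periodic signal, unvoiced ~ noise.
rng = np.random.default_rng(0)
t = np.arange(256) / 16000
voiced = [np.sin(2 * np.pi * 120 * t) + 0.05 * rng.standard_normal(256)
          for _ in range(50)]
unvoiced = [0.3 * rng.standard_normal(256) for _ in range(50)]
X = [vuv_features(f) for f in voiced + unvoiced]
y = [1] * 50 + [0] * 50                              # 1 = voiced, 0 = unvoiced
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.predict([vuv_features(voiced[0])]))        # -> [1]
```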

Two-Microphone Binary Mask Speech Enhancement in Diffuse and Directional Noise Fields

  • Abdipour, Roohollah;Akbari, Ahmad;Rahmani, Mohsen
    • ETRI Journal / v.36 no.5 / pp.772-782 / 2014
  • Two-microphone binary mask speech enhancement (2mBMSE) has been of particular interest in recent literature and has shown promising results. Current 2mBMSE systems rely on spatial cues of speech and noise sources. Although these cues are helpful for directional noise sources, they lose their efficiency in diffuse noise fields. We propose a new system that is effective in both directional and diffuse noise conditions. The system exploits two features. The first determines whether a given time-frequency (T-F) unit of the input spectrum is dominated by a diffuse or directional source. A diffuse signal is certainly a noise signal, but a directional signal could correspond to a noise or speech source. The second feature discriminates between T-F units dominated by speech or directional noise signals. Speech enhancement is performed using a binary mask, calculated based on the proposed features. In both directional and diffuse noise fields, the proposed system segregates speech T-F units with hit rates above 85%. It outperforms previous solutions in terms of signal-to-noise ratio and perceptual evaluation of speech quality improvement, especially in diffuse noise conditions.
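
A standard way to realize the first feature, deciding whether a time-frequency unit is dominated by a diffuse or a directional source, is the inter-channel magnitude-squared coherence, which approaches one for a directional source and stays low in a diffuse field. The sketch below builds a binary mask from that cue alone and is only a simplified stand-in: the paper's second feature, which separates directional speech from directional noise, is omitted.

```python
import numpy as np
from scipy.signal import stft

def coherence_mask(x_left, x_right, sr, coh_thresh=0.8):
    """Toy two-microphone binary mask: keep T-F units with high
    inter-channel coherence (directional, plausibly speech) and
    zero out low-coherence (diffuse) units."""
    f, t, X1 = stft(x_left, sr, nperseg=512)
    _, _, X2 = stft(x_right, sr, nperseg=512)

    def smooth(P):                      # recursive averaging over time
        out = np.empty_like(P)
        acc = P[:, 0]
        for i in range(P.shape[1]):
            acc = 0.8 * acc + 0.2 * P[:, i]
            out[:, i] = acc
        return out

    s11 = smooth(np.abs(X1) ** 2)
    s22 = smooth(np.abs(X2) ** 2)
    s12 = smooth(X1 * np.conj(X2))
    msc = np.abs(s12) ** 2 / (s11 * s22 + 1e-12)   # coherence in [0, 1]
    return (msc > coh_thresh).astype(float)         # 1 = keep this T-F unit
```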

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers rely on voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records fall into two types. The first is misrecognition, where the agent fails to recognize the user's speech at all. The second is misinterpretation, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each separation method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation and explores a new method for detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model based on patterns in the causes of misinterpretation errors. The results show that initial consonant extraction was the most effective separation method for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are hard to detect because they are not flagged as recognition failures, the study proposed diverse text separation methods and found one that improved performance remarkably. Second, when applied to conversational agents or voice recognition services that require neologism detection, the patterns of errors arising at the voice recognition stage can be specified, so that services can still deliver the results the user intended even for interactions not categorized as errors.
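
The syllable-based separation the paper exploits follows directly from the Unicode layout of precomposed Hangul: each syllable in U+AC00..U+D7A3 encodes its initial consonant (choseong), vowel, and final consonant arithmetically. Below is a small sketch of initial consonant extraction, with a deliberately crude Jaccard similarity standing in for the embedding-based similarity used in the study.

```python
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")  # the 19 initial consonants

def initial_consonants(word):
    """Extract the choseong of each Hangul syllable. Precomposed
    syllables occupy U+AC00..U+D7A3 with
    code = 0xAC00 + (cho*21 + jung)*28 + jong."""
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:
            out.append(CHOSEONG[code // (21 * 28)])
        else:
            out.append(ch)                 # pass non-Hangul through
    return "".join(out)

def choseong_similarity(w1, w2):
    """Crude syllable-initial similarity (Jaccard over choseong sets),
    a stand-in for the embedding-based similarity in the paper."""
    a, b = set(initial_consonants(w1)), set(initial_consonants(w2))
    return len(a & b) / max(len(a | b), 1)

print(initial_consonants("음성분리"))       # -> 'ㅇㅅㅂㄹ'
```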

Speaker Separation Based on Directional Filter and Harmonic Filter (Directional Filter와 Harmonic Filter 기반 화자 분리)

  • Baek, Seung-Eun;Kim, Jin-Young;Na, Seung-You;Choi, Seung-Ho
    • Speech Sciences / v.12 no.3 / pp.125-136 / 2005
  • Automatic speech recognition is much more difficult in the real world, where environmental noise and multiple speakers degrade the signal-to-interference ratio (SIR). Extracting the main speaker's voice from binaural sound is therefore an important problem in speech signal processing. In this paper, we use a directional filter and a harmonic filter, among existing methods, to extract the main speaker's information from binaural sound. The main speaker's voice is extracted with the directional filter, and the remaining speakers' information is removed with the harmonic filter, driven by pitch detection on the main speaker. As a result, the voice of the main speaker is enhanced.
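
A harmonic filter of this kind can be sketched as a comb-shaped binary mask in the STFT domain: given a pitch track for the main speaker, only bins near multiples of F0 are kept. This is an illustrative simplification that assumes the F0 track is supplied by a separate pitch detector, as in the paper's pipeline.

```python
import numpy as np
from scipy.signal import stft, istft

def harmonic_filter(x, sr, f0_track, width_hz=40.0):
    """Keep only spectral energy near harmonics of the target
    speaker's pitch track (one F0 value per STFT frame, 0 for
    unvoiced frames); a crude comb-mask stand-in for the paper's
    harmonic filter."""
    f, t, X = stft(x, sr, nperseg=1024)
    mask = np.zeros(X.shape)
    for j, f0 in enumerate(f0_track[:X.shape[1]]):
        if f0 <= 0:                        # unvoiced frame: leave muted
            continue
        for h in np.arange(f0, sr / 2, f0):        # harmonics of F0
            mask[np.abs(f - h) < width_hz, j] = 1.0
    _, y = istft(X * mask, sr, nperseg=1024)
    return y
```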

AM-FM Decomposition and Estimation of Instantaneous Frequency and Instantaneous Amplitude of Speech Signals for Natural Human-robot Interaction (자연스런 인간-로봇 상호작용을 위한 음성 신호의 AM-FM 성분 분해 및 순간 주파수와 순간 진폭의 추정에 관한 연구)

  • Lee, He-Young
    • Speech Sciences / v.12 no.4 / pp.53-70 / 2005
  • Vowels in speech signals are multicomponent signals composed of AM-FM components whose instantaneous frequency and instantaneous amplitude are time-varying. Changes in emotional state cause variation in the instantaneous frequencies and instantaneous amplitudes of these AM-FM components, so it is important to estimate them accurately when extracting information that represents emotional states and their changes in speech. In this paper, firstly, a method for decomposing speech signals into AM-FM components is presented. Secondly, the fundamental frequency of the vowel sound is estimated by a simple spectrogram-based method, and this estimate is used in the AM-FM decomposition. Thirdly, an estimation method is suggested for separating the instantaneous frequencies and instantaneous amplitudes of the decomposed AM-FM components, based on the Hilbert transform and the demodulation property of the extended Fourier transform. The estimated instantaneous frequencies and amplitudes can be used to modify the spectral distribution and to smoothly connect two words in corpus-based speech synthesis systems.
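
The Hilbert-transform step is straightforward to sketch: the analytic signal of a single AM-FM component yields the instantaneous amplitude as its magnitude and the instantaneous frequency as the derivative of its unwrapped phase. A minimal version follows; the paper's extended-Fourier-transform demodulation is not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert

def am_fm_estimate(component, sr):
    """Instantaneous amplitude and frequency of one AM-FM component
    via the analytic signal. `component` should be a single band of
    the decomposition, not the full multicomponent speech signal."""
    z = hilbert(component)                       # analytic signal
    ia = np.abs(z)                               # instantaneous amplitude
    phase = np.unwrap(np.angle(z))
    ifreq = np.diff(phase) * sr / (2 * np.pi)    # instantaneous frequency, Hz
    return ia, ifreq
```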

Improvement of convergence speed in FDICA algorithm with weighted inner product constraint of unmixing matrix (분리행렬의 가중 내적 제한조건을 이용한 FDICA 알고리즘의 수렴속도 향상)

  • Quan, Xingri;Bae, Keunsung
    • Phonetics and Speech Sciences / v.7 no.4 / pp.17-25 / 2015
  • For blind source separation of convolutive mixtures, FDICA (Frequency Domain Independent Component Analysis) algorithms are generally used. Since FDICA algorithms such as Sawada's FDICA and IVA (Independent Vector Analysis) work on a frequency-bin basis with a natural gradient descent method, they take a long time to converge. In this paper, we propose a new method to improve the convergence speed of FDICA algorithms. The proposed method drastically reduces the number of iterations of the natural gradient descent by applying a weighted inner product constraint to the unmixing matrix. Experimental results show that the proposed method achieves a large improvement in convergence speed without degrading the separation performance of the baseline algorithms.
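
For context, the per-bin natural gradient iteration that such constraints accelerate has the familiar form W ← W + μ(I − E[φ(y)yᴴ])W. A minimal single-bin sketch of this baseline update is shown below; the paper's weighted inner-product constraint, as well as scaling and permutation correction across bins, are not included.

```python
import numpy as np

def fdica_bin(Xf, n_iter=200, mu=0.1):
    """Natural-gradient FDICA update for one frequency bin.
    Xf: (n_mics, n_frames) complex STFT data for this bin.
    Returns the unmixing matrix W for the bin."""
    m = Xf.shape[0]
    W = np.eye(m, dtype=complex)
    for _ in range(n_iter):
        Y = W @ Xf
        phi = Y / (np.abs(Y) + 1e-9)     # score for super-Gaussian sources
        grad = np.eye(m) - (phi @ Y.conj().T) / Xf.shape[1]
        W = W + mu * grad @ W            # natural gradient step
    return W
```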

Overlapped Subband-Based Independent Vector Analysis

  • Jang, Gil-Jin;Lee, Te-Won
    • The Journal of the Acoustical Society of Korea / v.27 no.1E / pp.30-34 / 2008
  • This paper presents an improvement to existing blind signal separation (BSS) methods. The proposed method models the signal dependency inherent in acoustic objects in order to separate real-world convolutive sound mixtures. The frequency-domain approach requires solving the well-known permutation problem, which has been solved successfully by a vector representation of the sources whose multidimensional joint densities have a certain amount of dependency expressed by non-spherical distributions. For speech signals in particular, we observe strong dependencies across neighboring frequency bins that decrease as the bins move apart. The non-spherical joint density model proposed in this paper reflects this property of real-world speech signals. Experimental results show improved performance over spherical joint density representations.
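
The modeling difference shows up in the multivariate score function of IVA-style updates: a spherical joint density normalizes each bin by the source's norm over all frequencies, whereas a neighborhood-limited norm expresses the decaying cross-bin dependency described above. The sketch below shows both variants as an illustration, not as the paper's exact density model.

```python
import numpy as np

def iva_score(Y, half_width=None):
    """Multivariate score for IVA-style updates.
    Y: (n_freq, n_frames) complex STFT of one separated source.
    half_width=None: norm spans all bins (spherical density,
    standard IVA). A finite half_width restricts the dependency to
    neighboring bins, in the spirit of the overlapped subband model."""
    n_freq = Y.shape[0]
    power = np.abs(Y) ** 2
    if half_width is None:
        norm = np.sqrt(power.sum(axis=0, keepdims=True))
        return Y / (norm + 1e-9)
    phi = np.empty_like(Y)
    for f in range(n_freq):
        lo, hi = max(0, f - half_width), min(n_freq, f + half_width + 1)
        phi[f] = Y[f] / (np.sqrt(power[lo:hi].sum(axis=0)) + 1e-9)
    return phi
```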

Double Talk Processing using Blind Signal Separation in Acoustic Echo Canceller (음향반향제거기에서 암묵신호분리를 이용한 동시통화처리)

  • Lee, Haengwoo
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.1 / pp.43-50 / 2016
  • This paper presents an acoustic echo canceller that solves the double-talk problem using blind signal separation. An acoustic echo canceller may deteriorate or diverge during double-talk periods, so we use blind signal separation to detect double talk by separating the near-end speech from the mixed microphone signal. The blind signal separation extracts the near-end signal from dual microphones by iterative computations using second-order statistics in a closed reverberant environment. In this way the acoustic echo canceller operates correctly regardless of double talk. We verified the performance of the proposed acoustic echo canceller in computer simulations. The results show that the echo canceller with this algorithm detects double-talk periods well and then operates stably, without divergence of its coefficients, after the double talk ends. Its merits are simplicity and stability.
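
Operationally, the separated near-end signal acts as a double-talk detector that gates the echo canceller's adaptation. The sketch below shows that gating around a standard NLMS update, with the detector reduced to a per-sample boolean that, in the paper's system, would come from the blind-separation stage.

```python
import numpy as np

def aec_with_dtd(far, mic, near_end_active, taps=256, mu=0.5):
    """NLMS acoustic echo canceller whose adaptation is frozen
    whenever the double-talk detector fires. `near_end_active` is a
    per-sample boolean flag (here assumed given; in the paper it is
    derived from the blind-signal-separation output)."""
    w = np.zeros(taps)                       # echo-path filter estimate
    e = np.zeros(len(mic))                   # echo-cancelled output
    for t in range(taps, len(mic)):
        x = far[t - taps:t][::-1]            # recent far-end samples
        e[t] = mic[t] - w @ x                # subtract estimated echo
        if not near_end_active[t]:           # adapt only without double talk
            w += mu * e[t] * x / (x @ x + 1e-6)
    return e
```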