• Title/Summary/Keyword: Speech signal processing


Pitch Detection Using a Preprocessed Variable-Bandwidth Low-Pass Filter (전처리된 가변대역폭 LPF에 의한 피치검출법)

  • 한진희
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1995.06a
    • /
    • pp.221-224
    • /
    • 1995
  • In speech signal processing, it is necessary to detect the pitch accurately. The pitch extraction algorithms proposed to date have difficulty detecting pitch over a wide range of speech signals. In this paper, we therefore propose a new pitch detection algorithm that uses a low-pass filter with variable bandwidth. The method preprocesses each frame with the FFT to find the first formant of the speech signal, then detects the pitch from the signal low-pass filtered with a cutoff frequency set according to the first formant. Applying this method, we obtained pitch contours and improved the accuracy of pitch detection in some noisy environments.

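
The pipeline the abstract describes, a per-frame FFT to locate the first formant, a low-pass filter whose cutoff tracks it, and pitch detection on the filtered signal, can be sketched as below. This is an illustrative reconstruction, not the paper's code: the peak-picking "formant" estimate, the 150 Hz cutoff floor, and the autocorrelation pitch stage are all our assumptions.

```python
import numpy as np

def detect_pitch(frame, fs):
    # Preprocessing: estimate the "first formant" as the strongest
    # spectral peak below 1 kHz (a crude stand-in for a real formant tracker).
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    mag = np.abs(spec)
    band = freqs < 1000.0
    cutoff = max(freqs[band][np.argmax(mag[band])], 150.0)
    # Variable-bandwidth LPF: zero all FFT bins above the chosen cutoff.
    lp = spec.copy()
    lp[freqs > cutoff] = 0.0
    filtered = np.fft.irfft(lp, n=len(frame))
    # Pitch detection on the filtered frame by autocorrelation.
    ac = np.correlate(filtered, filtered, mode="full")[len(filtered) - 1:]
    lo, hi = int(fs / 400), int(fs / 60)      # search the 60-400 Hz range
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 8000
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 600 * t)
pitch = detect_pitch(frame, fs)
print(round(pitch, 1))      # close to the true 120 Hz fundamental
```

Because the cutoff adapts to the estimated formant, the higher-frequency component is suppressed before the autocorrelation stage, which is the point of the variable-bandwidth preprocessing.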

Effective speech recognition system for patients with Parkinson's disease (파킨슨병 환자에 대한 효과적인 음성인식 시스템)

  • Bak, Huiyong;Kim, Ryul;Lee, Sangmin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.6
    • /
    • pp.655-661
    • /
    • 2022
  • Since speech impairment is prevalent in patients with Parkinson's disease (PD), speech recognition systems suited to these patients are needed. In this paper, we propose a speech recognition system that effectively recognizes the speech of patients with PD. The system is first pre-trained with the Globalformer using speech data from healthy people, and then fine-tuned using a relatively small amount of speech data from patients with PD. For this analysis, we used the speech dataset of healthy people built by AI Hub and that of patients with PD collected at Inha University Hospital. In the experiments, the proposed system recognized the speech of patients with PD with a Character Error Rate (CER) of 22.15 %, a better result than the other methods compared.

A Study on Performance Evaluation of Hidden Markov Network Speech Recognition System (Hidden Markov Network 음성인식 시스템의 성능평가에 관한 연구)

  • 오세진;김광동;노덕규;위석오;송민규;정현열
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.4
    • /
    • pp.30-39
    • /
    • 2003
  • In this paper, we evaluated the performance of an HM-Net (Hidden Markov Network) speech recognition system on Korean speech databases. Acoustic models were constructed using HM-Nets, a modification of HMMs (Hidden Markov Models), the widely used statistical modeling method. HM-Nets perform state splitting in the contextual and temporal domains with the PDT-SSS (Phonetic Decision Tree-based Successive State Splitting) algorithm, a modification of the original SSS algorithm. In particular, a phonetic decision tree is adopted in contextual-domain state splitting to effectively represent context information that does not appear in the training speech data. In temporal-domain state splitting, splitting is carried out to effectively represent the duration of each phoneme, after which the optimal network of triphone models is constructed. Speech recognition was performed using the one-pass Viterbi beam search algorithm with a phone-pair/word-pair grammar for phoneme/word recognition, respectively, and the multi-pass search algorithm with n-gram language models for sentence recognition. A tree-structured lexicon was used to decrease the number of nodes by sharing the same prefixes among words. The performance of the HM-Net speech recognition system was evaluated under various recognition conditions. Through the experiments, we verified that it has superior recognition performance compared with previously introduced recognition systems.

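
The tree-structured lexicon the abstract mentions, in which words sharing a prefix of phones share nodes so the decoder expands each prefix only once, can be sketched as a trie. This is a generic illustration; the word list, phone sets, and helper names are hypothetical, not from the paper.

```python
def build_tree_lexicon(words):
    # Trie over phone sequences: words with a common prefix share nodes.
    root = {}
    for word, phones in words.items():
        node = root
        for ph in phones:
            node = node.setdefault(ph, {})
        node["#"] = word          # word-end marker
    return root

def count_nodes(node):
    # Count trie nodes, excluding the "#" end markers.
    return sum(1 for k in node if k != "#") + \
           sum(count_nodes(v) for k, v in node.items() if k != "#")

lexicon = {
    "seoul":  ["s", "eo", "u", "l"],
    "seosan": ["s", "eo", "s", "a", "n"],
    "suwon":  ["s", "u", "w", "o", "n"],
}
tree = build_tree_lexicon(lexicon)
flat_nodes = sum(len(p) for p in lexicon.values())   # 14 nodes without sharing
tree_nodes = count_nodes(tree)                       # 11 nodes with sharing
print(flat_nodes, tree_nodes)
```

Even on this toy three-word lexicon the shared prefixes save nodes; over a real vocabulary the savings, and hence the reduction in search effort, are much larger.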

Pulse-Coded Train and QRS Feature extraction Using Linear Prediction (선형예측법을 이용한 심전도 신호의 부호화와 특징추출)

  • Song, Chul-Gyu;Lee, Byung-Chae;Jeong, Kee-Sam;Lee, Myoung-Ho
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1992 no.05
    • /
    • pp.175-178
    • /
    • 1992
  • This paper proposes applying linear prediction, a high-performance technique from digital speech processing, to the analysis of digital ECG signals. Several significant properties indicate that ECG signals carry an important feature in the residual error signal obtained after processing by Durbin's linear prediction algorithm, so our ECG signal classification places its emphasis on the residual error signal. For each QRS complex, the feature used for recognition is obtained from a nonlinear transformation that maps the residual error signal to a three-state pulse-code train relative to the original ECG signal. The pulse-code train has the advantage of easy implementation in digital hardware circuits for automated ECG diagnosis. The algorithm performs feature extraction for arrhythmia detection very well: our studies indicate that PVC (premature ventricular contraction) detection achieves at least 90 percent sensitivity on arrhythmia data.

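
Durbin's linear prediction recursion and the residual error signal the abstract builds on can be sketched as follows. This is the textbook autocorrelation-method recursion, not the paper's implementation; the demo signal is a synthetic second-order autoregressive process, not an ECG.

```python
import numpy as np

def lpc(signal, order):
    # Durbin's recursion on the autocorrelation sequence r[0..order],
    # returning prediction coefficients a with a[0] = 1.
    n = len(signal)
    r = [float(np.dot(signal[:n - k], signal[k:])) for k in range(order + 1)]
    a, err = [1.0], r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                     # reflection coefficient
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        err *= 1.0 - k * k                 # updated prediction-error energy
    return np.array(a)

# Synthetic AR(2) test signal: x[t] = 0.75 x[t-1] - 0.5 x[t-2] + noise.
rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + noise[t]

a = lpc(x, 2)                         # expect roughly [1, -0.75, 0.5]
residual = np.convolve(x, a)[:2000]   # prediction-error (residual) signal
print(np.round(a, 2))
```

The residual carries much less energy than the signal itself, which is why the abstract's classification scheme can work on the residual error signal alone.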

A Study on Stable Motion Control of Humanoid Robot with 24 Joints Based on Voice Command

  • Lee, Woo-Song;Kim, Min-Seong;Bae, Ho-Young;Jung, Yang-Keun;Jung, Young-Hwa;Shin, Gi-Soo;Park, In-Man;Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.21 no.1
    • /
    • pp.17-27
    • /
    • 2018
  • We propose a new approach to controlling a biped robot's motion based on iterative learning of voice commands, aimed at implementing a smart factory. Real-time processing of the speech signal is very important for high-speed, precise automatic voice recognition. Recently, voice recognition has been used for intelligent robot control, artificial life, wireless communication, and IoT applications. To extract valuable information from the speech signal, make decisions in the process, and obtain results, the data needs to be manipulated and analyzed. The basic method for extracting the features of the voice signal is to find the Mel-frequency cepstral coefficients: coefficients that collectively represent the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. The reliability of voice commands for controlling the biped robot's motion is illustrated by computer simulation and experiments with a biped walking robot with 24 joints.
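
The MFCC computation the abstract describes, a short-term power spectrum, a triangular filter bank on the nonlinear mel scale, the log, and a linear cosine transform, can be sketched as below. The filter and coefficient counts are common defaults, not values from the paper.

```python
import numpy as np

def mfcc(frame, fs, n_filters=26, n_coeffs=13):
    # Short-term power spectrum of the windowed frame.
    power = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    # Triangular filter bank spaced evenly on the mel scale.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = inv_mel(np.linspace(0.0, mel(fs / 2), n_filters + 2))
    bins = np.floor((len(frame) + 1) * edges / fs).astype(int)
    fbank = np.zeros((n_filters, len(power)))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log filter-bank energies, then the linear cosine transform (DCT-II).
    log_energy = np.log(fbank @ power + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1)
                 / (2 * n_filters))
    return dct @ log_energy

fs = 16000
t = np.arange(512) / fs
coeffs = mfcc(np.sin(2 * np.pi * 440 * t), fs)
print(coeffs.shape)      # (13,)
```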

Voice-to-voice conversion using transformer network (Transformer 네트워크를 이용한 음성신호 변환)

  • Kim, June-Woo;Jung, Ho-Young
    • Phonetics and Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.55-63
    • /
    • 2020
  • Voice conversion can be applied to various voice processing applications, and it can also play an important role in data augmentation for speech recognition. The conventional method uses a voice conversion architecture borrowed from speech synthesis, with the Mel filter bank as the main parameter. The Mel filter bank is well suited to fast neural network computation, but it cannot be converted into a high-quality waveform without the aid of a vocoder, nor is it effective for obtaining data for speech recognition. In this paper, we focus on performing voice-to-voice conversion using only the raw spectrum. We propose a deep learning model based on the transformer network, which quickly learns the voice conversion properties using an attention mechanism between source and target spectral components. The experiments were performed on the TIDIGITS data, a series of numbers spoken by an English speaker. The converted voices were evaluated for naturalness and similarity using the mean opinion score (MOS) obtained from 30 participants. Our final results yielded 3.52±0.22 for naturalness and 3.89±0.19 for similarity.
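
The attention mechanism between source and target spectral components that the abstract relies on is, at its core, scaled dot-product attention. Below is a NumPy sketch of that standard formula, softmax(QKᵀ/√d)V, applied to made-up spectrum frames; it is not the authors' full transformer, and the frame and bin counts are arbitrary.

```python
import numpy as np

def attention(query, key, value):
    # Scaled dot-product attention with a numerically stable softmax.
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ value, weights

rng = np.random.default_rng(1)
src = rng.standard_normal((20, 64))   # 20 source spectrum frames, 64 bins
tgt = rng.standard_normal((24, 64))   # 24 target frames attend to the source
out, w = attention(tgt, src, src)
print(out.shape)                      # (24, 64): one mixture per target frame
```

Each target frame ends up as a convex combination of source frames, which is how the model aligns source and target spectra of different lengths without an explicit alignment step.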

Korean ESL Learners' Perception of English Segments: a Cochlear Implant Simulation Study (인공와우 시뮬레이션에서 나타난 건청인 영어학습자의 영어 말소리 지각)

  • Yim, Ae-Ri;Kim, Dahee;Rhee, Seok-Chae
    • Phonetics and Speech Sciences
    • /
    • v.6 no.3
    • /
    • pp.91-99
    • /
    • 2014
  • Although it is well documented that patients with cochlear implants experience hearing difficulties when processing their first language, very little is known about whether, and to what extent, cochlear implant patients recognize segments in a second language. This preliminary study examines how Korean learners of English identify English segments under normal-hearing and cochlear-implant-simulation conditions. Participants heard English vowels and consonants in three conditions: normal hearing, 12-channel noise vocoding with 0 mm spectral shift, and 12-channel noise vocoding with 3 mm spectral shift. The results confirmed that nonnative listeners can also retrieve spectral information from a vocoded speech signal, as they recognized vowel features fairly accurately despite the vocoding. In contrast, the intelligibility of the manner and place features of consonants was significantly decreased by vocoding. In addition, we found that spectral shift affected listeners' vowel recognition, probably because information regarding F1 is diminished by spectral shifting. The results suggest that patients with cochlear implants and normal-hearing second language learners would experience different patterns of listening errors when processing their second language(s).
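
Noise vocoding of the kind used in cochlear-implant simulations replaces each analysis band of the speech with noise carrying that band's energy envelope. The sketch below is a rough frame-based illustration with arbitrary band layout and frame size; it does not reproduce the study's exact processing, and the 0/3 mm spectral shift is not implemented.

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=12, frame=256):
    # Per frame: measure energy in each analysis band of the signal and
    # impose it on the matching band of a running noise carrier.
    edges = np.linspace(0.0, fs / 2, n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros(len(signal))
    freqs = np.fft.rfftfreq(frame, 1.0 / fs)
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        nspec = np.fft.rfft(noise[start:start + frame])
        shaped = np.zeros_like(nspec)
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = (freqs >= lo) & (freqs < hi)
            env = np.sqrt(np.mean(np.abs(spec[band]) ** 2))   # band envelope
            nband = nspec[band]
            scale = env / (np.sqrt(np.mean(np.abs(nband) ** 2)) + 1e-12)
            shaped[band] = nband * scale
        out[start:start + frame] = np.fft.irfft(shaped, n=frame)
    return out

fs = 8000
t = np.arange(2048) / fs
voc = noise_vocode(np.sin(2 * np.pi * 300 * t), fs)
print(voc.shape)
```

The output preserves the coarse spectral envelope (which is why vowel features survive vocoding) while discarding fine structure, consistent with the consonant degradation the study reports.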

Utilization of Phase Information for Speech Recognition (음성 인식에서 위상 정보의 활용)

  • Lee, Chang-Young
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.10 no.9
    • /
    • pp.993-1000
    • /
    • 2015
  • Mel-Frequency Cepstral Coefficients (MFCC) are among the notable feature vectors for speech signal processing. An evident drawback of MFCC is that the phase information is lost by taking the magnitude of the Fourier transform. In this paper, we consider a method of utilizing the phase information by treating the magnitudes of the real and imaginary components of the FFT separately. Applying this method to speech recognition with FVQ/HMM, the speech recognition error rate is found to decrease compared with conventional MFCC. By numerical analysis, we also show that the optimal number of MFCC components is 12, comprising 6 each from the real and imaginary components of the FFT.
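
The abstract's core idea, keeping the magnitudes of the real and imaginary FFT components as two separate streams (6 coefficients each) instead of collapsing them into one magnitude, can be sketched as below. The cepstral-style transform here is a plain DCT over log magnitudes; the paper's exact front end is not reproduced, and the function name is ours.

```python
import numpy as np

def phase_aware_features(frame, n_coeffs=6):
    # Keep |Re FFT| and |Im FFT| as separate streams rather than |FFT|,
    # so the relative phase information is not discarded entirely.
    spec = np.fft.rfft(frame * np.hamming(len(frame)))
    feats = []
    for part in (np.abs(spec.real), np.abs(spec.imag)):
        log_e = np.log(part + 1e-10)
        k = np.arange(len(log_e))
        dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * k + 1)
                     / (2 * len(log_e)))
        feats.append(dct @ log_e)
    return np.concatenate(feats)     # 6 + 6 = 12-dimensional feature vector

fs = 16000
t = np.arange(512) / fs
vec = phase_aware_features(np.sin(2 * np.pi * 440 * t))
print(vec.shape)      # (12,)
```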

A Study on Improving Noise Removal in a Speech Recognition System (음성인식 시스템에서의 잡음 제거 개선에 관한 연구)

  • 이창윤;이영훈
    • Journal of the Korea Society of Computer and Information
    • /
    • v.7 no.2
    • /
    • pp.73-78
    • /
    • 2002
  • This paper studies noise-processing techniques for a speech recognition system. A method combining SNR normalization with RAS is considered for noise processing, and the performance of the speech recognition system can be improved by combining it with other noise-processing techniques. The recognition experiments were implemented on a general-purpose digital signal processor (TMS320C31). The recognition word set is composed of 60 command words for the working environment and computer commands. The simulation considers the colored noise of a general environment. The experimental results showed that the word set achieves a maximum recognition rate of 94.61% when SNR normalization is combined with spectral subtraction.

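
Spectral subtraction, the technique whose combination with SNR normalization gave the best result above, can be sketched in its textbook form: subtract an estimated noise magnitude spectrum from each frame, floor at zero, and reuse the noisy phase. This is a generic illustration, not the paper's implementation; the frame size and noise estimate are assumptions.

```python
import numpy as np

def spectral_subtraction(noisy, noise_est, frame=256):
    # Estimate the noise magnitude spectrum from one noise-only frame.
    noise_mag = np.abs(np.fft.rfft(noise_est[:frame]))
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame + 1, frame):
        spec = np.fft.rfft(noisy[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # half-wave rectify
        clean_spec = mag * np.exp(1j * np.angle(spec))    # keep noisy phase
        out[start:start + frame] = np.fft.irfft(clean_spec, n=frame)
    return out

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(2048) / fs
clean = np.sin(2 * np.pi * 437.5 * t)        # 437.5 Hz falls exactly on a bin
noise = 0.3 * rng.standard_normal(2048)
enhanced = spectral_subtraction(clean + noise, noise)
err_before = np.mean(noise ** 2)
err_after = np.mean((enhanced - clean) ** 2)
print(err_after < err_before)
```

The half-wave rectification is what causes the characteristic "musical noise" artifact; real systems add an over-subtraction factor and a spectral floor to tame it.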

A study on sound source segregation of frequency domain binaural model with reflection (반사음이 존재하는 양귀 모델의 음원분리에 관한 연구)

  • Lee, Chai-Bong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.15 no.3
    • /
    • pp.91-96
    • /
    • 2014
  • For sound source localization and separation, the Frequency Domain Binaural Model (FDBM) offers low computational cost and high separation performance. The method localizes and separates sound sources by obtaining the Interaural Phase Difference (IPD) and Interaural Level Difference (ILD) in the frequency domain, but reflections cause problems in practical environments. To reduce the effect of reflection, a method is presented that simulates the sound localization of the direct sound, detects the first-arriving sound, determines its direction, and separates the source. Simulation results show that the estimated direction lies within 10% of the true source direction and that, in the presence of reflection, separation of the sound source improves over the existing FDBM, with higher coherence and PESQ (Perceptual Evaluation of Speech Quality) scores and lower directional damping. In the case of no reflection, the degree of separation was low.
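
The IPD and ILD cues FDBM computes in the frequency domain can be sketched for a single frame as below. The two-channel test signal (one source, delayed and attenuated at the right ear) is our own toy setup, not the paper's simulation.

```python
import numpy as np

def ipd_ild(left, right):
    # Per-bin interaural phase difference (radians) and level difference (dB),
    # computed from the FFTs of one windowed frame per ear.
    L = np.fft.rfft(left * np.hanning(len(left)))
    R = np.fft.rfft(right * np.hanning(len(right)))
    ipd = np.angle(L * np.conj(R))
    ild = 20.0 * np.log10((np.abs(L) + 1e-12) / (np.abs(R) + 1e-12))
    return ipd, ild

fs = 16000
t = np.arange(512) / fs
sig = np.sin(2 * np.pi * 500 * t)      # 500 Hz falls exactly on an FFT bin
delay = 4                               # right-ear signal lags by 4 samples
left = sig
right = 0.5 * np.roll(sig, delay)       # quieter and delayed at the right ear
ipd, ild = ipd_ild(left, right)
k = round(500 * 512 / fs)               # FFT bin at 500 Hz
print(round(ild[k]), round(ipd[k], 2))  # 6 dB level cue, pi/4 phase cue
```

A separation stage then groups each frequency bin by whether its (IPD, ILD) pair matches the cues expected for a given direction, which is exactly where reflections, by corrupting these cues, degrade the basic FDBM.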