• Title/Summary/Keyword: linear predictive coding

Search Results: 71

Noise Reduction Algorithm in Speech by Wiener Filter (위너필터에 의한 음성 중의 잡음제거 알고리즘)

  • Choi, Jae-Seung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.8 no.9
    • /
    • pp.1293-1298
    • /
    • 2013
  • This paper proposes a noise reduction algorithm that uses a Wiener filter to remove noise components from noisy speech and thereby improve the speech signal. The proposed algorithm first removes the white-noise spectrum from the noisy signal at each frame, based on a noise reshaping and reduction method. It then enhances the speech signal with a Wiener filter derived from linear predictive coding (LPC) analysis. The algorithm was evaluated experimentally using speech and noise data from a Japanese male speaker. Measurements of spectral distortion (SD) confirm that the proposed algorithm is effective for speech contaminated by white noise: the maximum improvement in output SD was 4.94 dB over a conventional Wiener filter.
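
The Wiener enhancement step described above can be sketched per frequency bin: the classic gain is H(k) = S(k) / (S(k) + N(k)), where S(k) is the estimated clean-speech power (in the paper, derived from LPC analysis) and N(k) the noise power. A minimal pure-Python illustration; the power values below are invented, not taken from the paper:

```python
def wiener_gain(signal_power, noise_power):
    """Per-bin Wiener gain H(k) = S(k) / (S(k) + N(k))."""
    return [s / (s + n) if (s + n) > 0.0 else 0.0
            for s, n in zip(signal_power, noise_power)]

def enhance(noisy_power, gain):
    """Attenuate each bin of the noisy power spectrum by its Wiener gain."""
    return [p * g for p, g in zip(noisy_power, gain)]

# Toy spectra: bin 0 is speech-dominated, bin 1 is noise-dominated.
S = [9.0, 1.0]
N = [1.0, 9.0]
H = wiener_gain(S, N)               # [0.9, 0.1]
cleaned = enhance([10.0, 10.0], H)  # noise-dominated bin is attenuated most
```

The gain always lies in [0, 1], so speech-dominated bins pass nearly unchanged while noise-dominated bins are suppressed.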

Design and Implementation of Simple Text-to-Speech System using Phoneme Units (음소단위를 이용한 소규모 문자-음성 변환 시스템의 설계 및 구현)

  • Park, Ae-Hee;Yang, Jin-Woo;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.3
    • /
    • pp.49-60
    • /
    • 1995
  • This paper describes the design and implementation of a small, simple Korean text-to-speech system. A parameter synthesis method is chosen for speech synthesis, using the PARCOR (PARtial autoCORrelation) coefficients obtained from LPC analysis, with the phoneme as the basic synthesis unit. PARCOR coefficients, pitch, and amplitude serve as synthesis parameters for voiced sounds, while the residual signal and PARCOR coefficients serve as parameters for unvoiced sounds. By using the residual signal as the excitation for unvoiced sounds, 60% intelligibility was obtained. The synthesis experiments show that word-level synthesis is feasible; control of phoneme duration is needed to synthesize sentence-level speech. The system was built on a PC 486 with a 70 Hz-4.5 kHz band-pass filter for speech input/output, an amplifier, and a TMS320C30 DSP board.
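
The PARCOR coefficients used above fall out of the Levinson-Durbin recursion applied to a frame's autocorrelation sequence: each stage produces one reflection (PARCOR) coefficient alongside the updated LPC predictor. A minimal sketch with toy autocorrelation values (not the paper's data):

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion on autocorrelation r[0..order].
    Returns (lpc, parcor): LPC coefficients a[1..p] of
    A(z) = 1 + sum a_k z^-k, and the PARCOR (reflection) coefficients."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    e = r[0]                      # prediction error energy
    parcor = []
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e              # i-th reflection coefficient
        parcor.append(k)
        a_new = a[:]
        for j in range(1, i):     # symmetric predictor update
            a_new[j] = a[j] + k * a[i - j]
        a_new[i] = k
        a = a_new
        e *= (1.0 - k * k)        # error shrinks each stage
    return a[1:], parcor

# Autocorrelation of an AR(1)-like signal: r[k] ~ 0.5^k.
lpc, parcor = levinson_durbin([1.0, 0.5, 0.25], 2)
```

For a stable frame every PARCOR coefficient satisfies |k| < 1, which is why PARCOR is a convenient quantization target for synthesis.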


Classification of Transient Signals in Ocean Background Noise Using Bayesian Classifier (베이즈 분류기를 이용한 수중 배경소음하의 과도신호 분류)

  • Kim, Ju-Ho;Bok, Tae-Hoon;Paeng, Dong-Guk;Bae, Jin-Ho;Lee, Chong-Hyun;Kim, Seong-Il
    • Journal of Ocean Engineering and Technology
    • /
    • v.26 no.4
    • /
    • pp.57-63
    • /
    • 2012
  • In this paper, a Bayesian classifier based on PCA (principal component analysis) is proposed to classify underwater transient signals, using 16th-order LPC (linear predictive coding) coefficients as the feature vector. The proposed classifier operates in two steps: mechanical signals are first separated from biological signals, and each type of mechanical signal is then recognized in the second step. Three biological and two mechanical transient signals were used in the experiments. With the full 16-dimensional LPC feature vector, the classification rates for biological and mechanical signals were 94.75% and 97.23%, respectively. To determine the effect of underwater noise on classification performance, ocean ambient noise was added to the test signals, and the classification rate as a function of SNR (signal-to-noise ratio) was compared while varying the feature-vector dimension with PCA. At 10 dB SNR under ambient noise, the classification rates for the biological and mechanical signals were 0.51% and 100%, respectively; when PCA reduced the feature vector to three dimensions, the rates changed to 53.07% and 83.14%. For correct classification under ocean ambient noise, an SNR above 10 dB is required with a three-dimensional feature vector, and above 30 dB with a seven-dimensional one.
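
The two-step decision described above can be sketched with a toy Gaussian Bayes classifier: step one separates biological from mechanical signals, and step two picks the mechanical subtype. Everything below is illustrative; the class names, 2-D features, and statistics are invented stand-ins for the paper's 16th-order LPC vectors and learned class models.

```python
import math

def gauss_loglik(x, mean, var):
    """Log-likelihood of feature vector x under a diagonal Gaussian."""
    return sum(-0.5 * (math.log(2.0 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def bayes_pick(x, classes):
    """Return the class name with the highest Gaussian log-likelihood
    (equal priors assumed, so likelihood alone decides)."""
    return max(classes, key=lambda c: gauss_loglik(x, *classes[c]))

# Step 1: biological vs mechanical (hypothetical 2-D feature statistics).
step1 = {"biological": ([0.0, 0.0], [1.0, 1.0]),
         "mechanical": ([5.0, 5.0], [1.0, 1.0])}
# Step 2: mechanical subtype, applied only to signals judged mechanical.
step2 = {"engine":    ([4.0, 6.0], [1.0, 1.0]),
         "propeller": ([6.0, 4.0], [1.0, 1.0])}

def classify(x):
    kind = bayes_pick(x, step1)
    return (kind, bayes_pick(x, step2)) if kind == "mechanical" else (kind, None)
```

The cascade mirrors the paper's structure: the second-stage model is only consulted once the coarse biological/mechanical decision has been made.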

Comparison of Characteristic Vector of Speech for Gender Recognition of Male and Female (남녀 성별인식을 위한 음성 특징벡터의 비교)

  • Jeong, Byeong-Goo;Choi, Jae-Seung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.7
    • /
    • pp.1370-1376
    • /
    • 2012
  • This paper proposes a gender recognition algorithm that classifies a speaker as male or female. Characteristic vectors for male and female speakers are analyzed, and recognition experiments with a neural network are performed using these vectors. The input characteristic vectors compared are: 10 LPC (linear predictive coding) cepstrum coefficients; 12 LPC cepstrum coefficients; 12 FFT (fast Fourier transform) cepstrum coefficients plus 1 RMS (root mean square) value; and 12 LPC cepstrum coefficients plus 8 FFT spectrum values. A neural network with a 20-20-2 architecture, trained on the 12 LPC cepstrum coefficients plus 8 FFT spectrum values, was used in this experiment. The resulting average recognition rates were 99.8% for male speakers and 96.5% for female speakers.
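
The LPC cepstrum coefficients used as network inputs above can be derived from LPC coefficients by the standard recursion c_n = -a_n - sum_{k=1}^{n-1} (k/n) c_k a_{n-k}. A small sketch; the order-1 model below is purely illustrative:

```python
def lpc_to_cepstrum(a, n_ceps):
    """Convert LPC coefficients a[1..p] (A(z) = 1 + sum a_k z^-k)
    into the first n_ceps cepstrum coefficients of the model 1/A(z)."""
    p = len(a)
    c = []
    for n in range(1, n_ceps + 1):
        acc = -a[n - 1] if n <= p else 0.0
        acc -= sum((k / n) * c[k - 1] * a[n - k - 1]
                   for k in range(max(1, n - p), n))
        c.append(acc)
    return c

# For 1/A(z) = 1/(1 - 0.5 z^-1) the exact cepstrum is c_n = 0.5^n / n.
ceps = lpc_to_cepstrum([-0.5], 3)
```

The recursion is attractive in recognition front-ends because it turns the LPC analysis already performed into cepstral features with no extra transform.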

$F_2$ Formant Frequency Characteristics of the Aging Male and Female Speakers (한국어 모음에서 연령증가에 따른 제2음형대의 변화양상)

  • 김찬우;차흥억;장일환;김선태;오승철;석윤식;이영숙
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.10 no.2
    • /
    • pp.119-123
    • /
    • 1999
  • Background and Objectives: Conditions such as muscle atrophy, stretching of the strap muscles, and continued craniofacial growth have been cited as contributing to the changes observed in vocal tract structure and function in elderly speakers. The purpose of the present study is to compare F1 and F2 frequency levels in elderly and young adult male and female speakers producing a series of vowels ranging from high-front to low-back placement. Materials and Methods: The subjects were two groups of young adults (10 males, 10 females; mean age 21 years, range 19-24 years) and two groups of elderly speakers (10 males, 10 females; mean age 67 years, range 60-84 years). Each subject was judged by a speech pathologist to be a speaker of unimpaired standard Korean. The microphone was positioned 2 cm from the speaker's lips, and each speaker sustained each of the five vowels for 5 s. Formant frequencies were obtained by linear predictive coding analysis on a CSL model 4300B (Kay Co.). Results: Repeated-measures ANOVA procedures were completed on the F1 and F2 data for the male and female speakers. F2 formant frequencies were significantly lower for elderly speakers. Conclusions: We presume lengthening of the F2 vocal cavity (from the point of tongue constriction to the lips) in elderly speakers. Research designed to observe dynamic speech production more directly will be needed.
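
Formant frequencies come out of LPC analysis as the angles of the complex poles of the all-pole model: f = (angle / 2π) · fs. A minimal sketch for an order-2 model whose pole pair is placed at an invented 500 Hz formant (real analyzers like the CSL use higher orders and pick several pole pairs):

```python
import cmath
import math

def formant_from_lpc2(a1, a2, fs):
    """Formant frequency (Hz) of an order-2 LPC model
    A(z) = 1 + a1*z^-1 + a2*z^-2, i.e. from the roots of z^2 + a1*z + a2."""
    disc = cmath.sqrt(a1 * a1 - 4.0 * a2)
    root = (-a1 + disc) / 2.0          # one of the conjugate pole pair
    if root.imag < 0:
        root = root.conjugate()        # take the upper-half-plane pole
    return cmath.phase(root) / (2.0 * math.pi) * fs

# Place a pole pair at 500 Hz with radius 0.95 (fs = 8000 Hz), then recover it.
fs, f0, r = 8000.0, 500.0, 0.95
theta = 2.0 * math.pi * f0 / fs
a1 = -2.0 * r * math.cos(theta)
a2 = r * r
estimated = formant_from_lpc2(a1, a2, fs)
```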


Sustained Vowel Modeling using Nonlinear Autoregressive Method based on Least Squares-Support Vector Regression (최소 제곱 서포트 벡터 회귀 기반 비선형 자귀회귀 방법을 이용한 지속 모음 모델링)

  • Jang, Seung-Jin;Kim, Hyo-Min;Park, Young-Choel;Choi, Hong-Shik;Yoon, Young-Ro
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.7
    • /
    • pp.957-963
    • /
    • 2007
  • In this paper, a nonlinear autoregressive (NAR) method based on least-squares support vector regression (LS-SVR) is introduced and tested for nonlinear sustained-vowel modeling. On a database of 43 sustained vowels from patients with benign vocal fold lesions, which have aperiodic waveforms, this nonlinear synthesizer reproduced chaotic sustained vowels nearly perfectly and preserved natural qualities of the sound such as jitter, which linear predictive coding fails to keep. However, some phonations were reproduced quite differently from the original sounds, presumably because a single-band model cannot control and decompose the high-frequency components. A multi-band model with a wavelet filter bank was therefore adopted in place of the single-band model, resulting in improved stability. In conclusion, nonlinear sustained-vowel modeling using NAR based on LS-SVR can reconstruct synthesized sounds that closely resemble the original voiced sounds.
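
A NAR model predicts the next sample from a vector of past samples; in LS-SVR form (simplified here without the bias term that the full formulation carries), the dual weights solve the linear system (K + λI)α = y for a kernel matrix K. A self-contained sketch fitted to a toy sine wave; the RBF kernel choice, lag order, and all hyperparameters below are assumptions for illustration:

```python
import math

def rbf(u, v, gamma):
    """Gaussian RBF kernel between two lag vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def solve(A, b):
    """Gaussian elimination with partial pivoting for the dual system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Nonlinear autoregression: predict x[n] from the d previous samples.
d, gamma, lam = 3, 1.0, 1e-6
series = [math.sin(0.3 * n) for n in range(20)]
X = [series[n - d:n] for n in range(d, len(series))]   # lag vectors
y = [series[n] for n in range(d, len(series))]         # next-sample targets
K = [[rbf(xi, xj, gamma) + (lam if i == j else 0.0)
      for j, xj in enumerate(X)] for i, xi in enumerate(X)]
alpha = solve(K, y)

def predict(x):
    """NAR prediction: kernel expansion over the training lag vectors."""
    return sum(a * rbf(x, xi, gamma) for a, xi in zip(alpha, X))
```

Iterating `predict` on its own output would free-run the synthesizer, which is how a fitted NAR model regenerates a sustained vowel.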

Classification of Whale Sounds using LPC and Neural Networks (신경망과 LPC 계수를 이용한 고래 소리의 분류)

  • An, Woo-Jin;Lee, Eung-Jae;Kim, Nam-Gyu;Chong, Ui-Pil
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.18 no.2
    • /
    • pp.43-48
    • /
    • 2017
  • Underwater transient signals are complex, time-varying, nonlinear, and of short duration, which makes them very hard to model with reference patterns. In this paper we divide the whole signal into short frames of constant length, with overlap between consecutive frames. Twentieth-order LPC (linear predictive coding) coefficients are extracted from the original signals using the Durbin algorithm and applied to a neural network with two hidden layers; 65% of the signals were used for training and 35% for testing. The whale sounds classified were those of the blue whale, Dulsae whale, gray whale, humpback whale, minke whale, and northern right whale. A classification rate of more than 83% was obtained on the test signals.
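
The framing step described above (fixed-length frames advanced with overlap) can be sketched as:

```python
def frame_signal(x, frame_len, hop):
    """Split signal x into frames of frame_len samples, advancing hop
    samples per frame; hop < frame_len gives overlapping frames."""
    return [x[i:i + frame_len]
            for i in range(0, len(x) - frame_len + 1, hop)]

# 10 samples, frame length 4, hop 2 -> 50% overlap between neighbours.
frames = frame_signal(list(range(10)), 4, 2)
```

Each frame would then feed the Durbin recursion to produce one 20-dimensional LPC vector for the network.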


Relationship between Formants and Constriction Areas of Vocal Tract in 9 Korean Standard Vowels (우리말 모음의 발음시 음형대와 조음위치의 관계에 대한 연구)

  • 서경식;김재영;김영기
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.5 no.1
    • /
    • pp.44-58
    • /
    • 1994
  • The formants of the 9 standard Korean vowels (as spoken by average residents of Seoul, in the central area of the Korean peninsula) were measured by linear predictive coding (LPC) and fast Fourier transform (FFT) analysis. The author had previously reported the constriction areas for the standard Korean vowels; using those data together with videovelopharyngograms and lateral roentgenograms of the vocal tract, the distance from the glottis to the constriction area was newly measured for each vowel. We correlated the formant frequencies with this distance, and also with the position of the tongue in the vocal tract, described in two ways: the position in the oral cavity, as the distance from an imaginary palatal line to the highest point of the tongue, and the position in the pharyngeal cavity, as the distance from the back of the tongue to the posterior pharyngeal wall. The study was performed with 10 adults (5 male, 5 female) speaking the 9 primary standard Korean vowels. As reported in a previous article, the vowels [i], [e], [ɛ] are articulated at the hard palate level, [ɨ] and [u] at the soft palate level, [o] at the upper pharynx level, and [ə], [ʌ], [a] at the lower pharynx level; that article also noted the significance of the pharyngeal cavity in vowel articulation. From this study we concluded that: 1) F1 is related to the oral-cavity-articulated vowels [i, e, ɛ, ɨ, u]. 2) Within the oral-cavity-articulated vowels [i, e, ɛ, ɨ, u] and the upper-pharynx-articulated vowel [o], F2 rises as the distance from the glottis to the constriction area increases; within the lower-pharynx-articulated vowels [ə, ʌ, a], F2 rises as that distance decreases. 3) The stronger the back-vowel tendency, the higher the F1 and F2 frequencies. 4) F3 and F4 showed no correlation with the constriction area or with the position of the tongue in the vocal tract. 5) The difference F2 - F1 is an excellent indicator for distinguishing oral-cavity-articulated vowels from pharyngeal-cavity-articulated vowels: F2 - F1 below about 600 Hz indicates that the vowel is articulated in the pharyngeal cavity, and above about 600 Hz, in the oral cavity.
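
Conclusion 5 above amounts to a one-line classifier on the F2 - F1 difference. A sketch using the study's roughly 600 Hz threshold; the example formant values are invented:

```python
def articulation_place(f1, f2, threshold=600.0):
    """Classify a vowel's articulation place from F2 - F1 (Hz),
    per the ~600 Hz rule reported in the study."""
    return "oral" if (f2 - f1) > threshold else "pharyngeal"

# Invented values: a front [i]-like vowel vs a low back [a]-like vowel.
front = articulation_place(300.0, 2200.0)
back = articulation_place(750.0, 1200.0)
```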


An Effective Feature Extraction Method for Fault Diagnosis of Induction Motors (유도전동기의 고장 진단을 위한 효과적인 특징 추출 방법)

  • Nguyen, Hung N.;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.7
    • /
    • pp.23-35
    • /
    • 2013
  • This paper proposes an effective technique for automatically extracting feature vectors from vibration signals for fault classification systems. Conventional mel-frequency cepstral coefficients (MFCCs) are sensitive to noise in vibration signals, which degrades classification accuracy. To solve this problem, this paper proposes spectral envelope cepstral coefficient (SECC) analysis, in which a filter bank is built from the spectral envelopes of the vibration signals in four steps: (1) a linear predictive coding (LPC) algorithm specifies the spectral envelope of every faulty vibration signal; (2) all envelopes are averaged to obtain the general spectral shape; (3) a gradient descent method finds the extrema of the average envelope and their frequencies; (4) a non-overlapping filter bank is constructed with centers calculated from the distances between valley frequencies of the envelope. This filter bank is then used in the cepstral coefficient computation to extract feature vectors. Finally, a multi-layer support vector machine (MLSVM) with various sigma values uses these parameters to identify the fault types of induction motors. Experimental results indicate that the proposed extraction method outperforms other feature extraction algorithms, yielding more than 99.65% classification accuracy.
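
Steps (3)-(4) above hinge on locating the valleys of the averaged envelope and using them as band edges for a non-overlapping filter bank. A toy sketch; a simple neighbour comparison stands in for the paper's gradient descent, and the envelope values are invented:

```python
def find_valleys(env):
    """Indices of local minima of a sampled spectral envelope."""
    return [i for i in range(1, len(env) - 1)
            if env[i] < env[i - 1] and env[i] < env[i + 1]]

def bands_from_valleys(valleys, n_bins):
    """Non-overlapping frequency bands whose edges are the valley indices."""
    edges = [0] + valleys + [n_bins - 1]
    return list(zip(edges[:-1], edges[1:]))

env = [5, 3, 1, 3, 5, 3, 2, 4, 6]   # invented averaged envelope
valleys = find_valleys(env)          # local minima at indices 2 and 6
bands = bands_from_valleys(valleys, len(env))
```

Because band edges sit at envelope valleys, each filter in the bank covers one spectral peak, which is what makes the resulting cepstral features track the envelope shape rather than the noise floor.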

Audio Stream Delivery Using AMR(Adaptive Multi-Rate) Coder with Forward Error Correction in the Internet (인터넷 환경에서 FEC 기능이 추가된 AMR음성 부호화기를 이용한 오디오 스트림 전송)

  • 김은중;이인성
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.12A
    • /
    • pp.2027-2035
    • /
    • 2001
  • In this paper, we present an audio-stream delivery scheme for the Internet that uses the AMR (Adaptive Multi-Rate) coder, adopted by ETSI and 3GPP as the standard vocoder for next-generation IMT-2000 service, combining sender-side FEC with a receiver-side reconstruction technique. With a media-specific FEC scheme, the chance of recovering lost packets is much increased by adding repair data to the main data stream, from which the contents of lost packets can be recovered. Since the AMR codec is based on the code-excited linear prediction (CELP) coding model, a frame-erasure concealment method for CELP-based coders is used. The proposed scheme was evaluated against the ITU-T G.729 (CS-ACELP) coder and AMR at 12.2 kbit/s using SNR (signal-to-noise ratio) and MOS (mean opinion score) tests. At 10% packet loss, the proposed scheme scores 1.1 higher in MOS and 5.61 dB higher in SNR than AMR at 12.2 kbit/s, and it maintains communicable speech quality at frame-erasure rates up to 20%.
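
Media-specific FEC of the kind described above piggybacks a (typically lower-rate) copy of frame n-1 onto packet n, so a single lost packet can be repaired from its successor. A protocol-level sketch; the frames are stand-in strings, not actual AMR bitstreams:

```python
def packetize(frames):
    """Packet n carries primary frame n plus a redundant copy of frame n-1."""
    return [(f, frames[i - 1] if i > 0 else None)
            for i, f in enumerate(frames)]

def depacketize(packets):
    """Rebuild the frame sequence, repairing single losses (None entries)
    from the redundant copy carried by the following packet."""
    out = []
    for i, pkt in enumerate(packets):
        if pkt is not None:
            out.append(pkt[0])
        elif i + 1 < len(packets) and packets[i + 1] is not None:
            out.append(packets[i + 1][1])   # recover from next packet's FEC
        else:
            out.append(None)                # unrecoverable: conceal instead
    return out

frames = ["f0", "f1", "f2", "f3"]
packets = packetize(frames)
packets[2] = None                           # simulate loss of packet 2
recovered = depacketize(packets)
```

When two consecutive packets are lost, the redundancy cannot help, which is where the CELP frame-erasure concealment mentioned in the abstract takes over.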
