• Title/Summary/Keyword: Speech Signal

Search Results: 1,174

Distribution of the Slopes of Autocovariances of Speech Signals in Frequency Bands (음성 신호의 주파수 대역별 자기 공분산 기울기 분포)

  • Kim, Seonil
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.5 / pp.1076-1082 / 2013
  • Frequency bands are identified that maximize the slopes of the autocovariances of speech signals in the frequency domain, in order to improve the separability of speech from background noise. A speech signal is divided into blocks of sampled data, and each block is transformed to the frequency domain using the Fast Fourier Transform (FFT). The autocovariance coefficients within a given frequency band of a block are fitted by linear regression; the slope of the fitted line, called the slope of autocovariance, varies from band to band according to the characteristics of the speech signal. Using 200 files of speech from one male speaker, the slopes of the autocovariances are analyzed and compared across bands.
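
As a rough illustration of the band-slope computation described in this abstract, the Python/NumPy sketch below transforms one block to the frequency domain, computes the autocovariance of the magnitudes inside one band, and fits its slope by linear regression. The function name, band edges, lag count, and normalization are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def band_autocovariance_slope(block, fs, f_lo, f_hi, n_lags=32):
    """Slope of the autocovariance of FFT magnitudes in one frequency band.

    Sketch of the general procedure only; the paper's band edges, lag count,
    and normalization may differ.
    """
    spectrum = np.abs(np.fft.rfft(block))               # block -> frequency domain
    freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
    band = spectrum[(freqs >= f_lo) & (freqs < f_hi)]    # keep one band

    band = band - band.mean()                            # autocovariance within the band
    acov = np.array([np.mean(band[:len(band) - k] * band[k:])
                     for k in range(min(n_lags, len(band) - 1))])

    lags = np.arange(len(acov))
    slope, _intercept = np.polyfit(lags, acov, 1)        # linear regression over lags
    return slope

# Example: slope for the 1-2 kHz band of a 1024-sample block at 8 kHz
block = np.random.randn(1024)                            # stand-in for real speech samples
print(band_autocovariance_slope(block, fs=8000, f_lo=1000, f_hi=2000))
```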

Speaker Identification Based on Incremental Learning Neural Network

  • Heo, Kwang-Seung;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.1 / pp.76-82 / 2005
  • A speech signal carries speaker-specific features that can be extracted by speech signal processing and used to identify the speaker. In this paper, we propose a speaker identification system based on an incremental learning neural network. Speech recorded through a microphone is divided into frames of 1024 samples, and frame energy is used to separate the signal into voiced and unvoiced parts. Twelve LPC cepstrum coefficients are extracted and used as the input to the neural network, an MLP with 12 input nodes, 8 hidden nodes, and 4 output nodes, where each output node corresponds to one identified speaker (the first output node responds to the first speaker). Incremental learning begins when a new speaker is encountered: the weights that have already been learned are kept, and only the new weights created by adding the speaker are trained. This overcomes the drawback of a conventional neural network, which must repeat its learning whenever a new speaker is entered. The network architecture is extended with the number of speakers, so the system can learn without restricting how many speakers it handles.
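
The following sketch illustrates the incremental-learning idea in plain NumPy: the output layer grows by one row per new speaker, and only that new row is trained while previously learned weights are kept. The dimensions follow the abstract (12 cepstrum inputs, 8 hidden nodes); the initialization, error rule, and learning rate are assumptions, not the paper's training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

class IncrementalSpeakerMLP:
    """Minimal sketch of an MLP whose output layer grows with each new speaker."""

    def __init__(self, n_in=12, n_hidden=8):
        self.W_hid = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W_out = np.empty((0, n_hidden))             # one row per known speaker

    def add_speaker(self):
        new_row = rng.normal(scale=0.1, size=(1, self.W_out.shape[1]))
        self.W_out = np.vstack([self.W_out, new_row])    # old rows stay unchanged
        return self.W_out.shape[0] - 1                   # index of the new speaker

    def forward(self, x):
        h = np.tanh(self.W_hid @ x)
        return self.W_out @ h                            # one score per speaker

    def train_new_speaker(self, speaker_idx, frames, epochs=50, lr=0.05):
        """Update only the newly added output row; earlier weights are frozen."""
        for _ in range(epochs):
            for x in frames:                             # frames: 12-dim cepstrum vectors
                h = np.tanh(self.W_hid @ x)
                err = 1.0 - self.W_out[speaker_idx] @ h
                self.W_out[speaker_idx] += lr * err * h

net = IncrementalSpeakerMLP()
idx = net.add_speaker()
net.train_new_speaker(idx, frames=[rng.normal(size=12) for _ in range(20)])
print(net.forward(rng.normal(size=12)))
```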

Speech Reinforcement Based on G.729A Speech Codec Parameter Under Near-End Background Noise Environments (근단 배경 잡음 환경에서 G.729A 음성부호화기 파라미터에 기반한 새로운 음성 강화 기법)

  • Choi, Jae-Hun;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.28 no.4 / pp.392-400 / 2009
  • In this paper, we propose an effective speech reinforcement technique based on the ITU-T G.729A CS-ACELP codec for near-end background noise environments. Since the intelligibility of far-end speech for the near-end listener is significantly reduced by near-end noise, a far-end speech reinforcement approach is required to counteract this phenomenon. In contrast to conventional speech reinforcement algorithms, we reinforce the excitation signal carried by the G.729A codec parameters received from the far end under various background noise conditions. Specifically, the excitation signal of the ambient noise at the near end is first estimated through the G.729A encoder, and the excitation signal of the far-end speech is then reinforced; we propose a novel approach that reinforces the far-end excitation signal directly in the G.729A decoder. The performance of the proposed algorithm is evaluated with the CCR (Comparison Category Rating) test, a method for subjective determination of transmission quality defined in ITU-T P.800, under various noise environments, and shows better performance than conventional SNR recovery methods.
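
The actual method operates on G.729A codec parameters (codebook gains and excitation), which are not reproduced here; the hedged Python sketch below only illustrates the underlying idea of boosting a far-end excitation relative to an estimated near-end noise excitation. The target SNR, gain limits, and subframe length are arbitrary assumptions.

```python
import numpy as np

def reinforcement_gain(farend_excitation, nearend_noise_excitation,
                       target_snr_db=15.0, g_min=1.0, g_max=4.0):
    """Conceptual sketch: boost the far-end excitation so its energy stays
    target_snr_db above the estimated near-end noise excitation energy.
    Not the G.729A parameter-domain processing described in the paper.
    """
    e_far = np.mean(farend_excitation ** 2) + 1e-12
    e_noise = np.mean(nearend_noise_excitation ** 2) + 1e-12
    snr_db = 10.0 * np.log10(e_far / e_noise)
    gain_db = max(0.0, target_snr_db - snr_db)           # boost only when SNR is too low
    gain = 10.0 ** (gain_db / 20.0)
    return float(np.clip(gain, g_min, g_max))

far = np.random.randn(80)                                 # one 10 ms subframe at 8 kHz
noise = 0.5 * np.random.randn(80)
print(reinforcement_gain(far, noise))
```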

A New Method for Segmenting Speech Signal by Frame Averaging Algorithm

  • Byambajav D.;Kang Chul-Ho
    • The Journal of the Acoustical Society of Korea / v.24 no.4E / pp.128-131 / 2005
  • A new algorithm for speech signal segmentation is proposed. The algorithm finds successive similar frames belonging to a segment and represents the segment by an average spectrum. A speech signal is slowly time-varying in the sense that, when examined over a sufficiently short period of time (between 10 and 100 ms), its characteristics are fairly stationary; the approach is based on finding these fairly stationary periods. Advantages of the algorithm are accurate segment boundary decisions and simple computation. On the CMU ARCTIC corpus, as many as 82.20% of the boundaries produced by frame-averaging segmentation coincide with manually verified segment boundaries within 16 ms, and more than 90% coincide within 32 ms. The method can also be combined with many other types of automatic segmentation (HMM-based, acoustic-cue or feature-based, etc.).
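
A minimal sketch of the frame-averaging idea, assuming a cosine distance between each new frame's magnitude spectrum and the running average spectrum of the current segment; the paper's actual similarity measure and threshold may differ.

```python
import numpy as np

def segment_by_frame_averaging(frames, threshold=0.35):
    """Group successive similar frames into segments, each represented by the
    average spectrum of its frames. `frames` is a list of magnitude spectra.
    """
    boundaries, averages = [0], []
    running_sum, count = None, 0
    for i, spec in enumerate(frames):
        if count > 0:
            avg = running_sum / count
            dist = 1.0 - np.dot(avg, spec) / (np.linalg.norm(avg) * np.linalg.norm(spec) + 1e-12)
            if dist > threshold:                          # frame no longer matches the average
                averages.append(avg)
                boundaries.append(i)                      # start a new segment here
                running_sum, count = None, 0
        running_sum = spec.copy() if running_sum is None else running_sum + spec
        count += 1
    if count > 0:
        averages.append(running_sum / count)
    return boundaries, averages

# Toy example: 128-sample frames of a signal whose frequency changes halfway through
x = np.concatenate([np.sin(0.2 * np.arange(1024)), np.sin(0.8 * np.arange(1024))])
frames = [np.abs(np.fft.rfft(x[i:i + 128])) for i in range(0, len(x) - 128, 128)]
print(segment_by_frame_averaging(frames)[0])              # boundary near frame 8
```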

A Study on the Adaptive Method for Extracting Optimum Features of Speech Signal (음성신호의 최적특징을 적응적으로 추출하는 방법에 관한 연구)

  • 장승관;차태호;최웅세;김창석
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.2 / pp.373-380 / 1994
  • In this paper, we propose a method for extracting optimum features of a speech signal while adjusting for signal level. Features are extracted with the FRLS (Fast Recursive Least Squares) algorithm: each frame is equalized to a constant level, and the optimum features are then extracted using the equalized autocorrelation function proposed in this paper.
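
The sketch below only illustrates the level-equalization idea: each frame is scaled to a constant RMS level before its autocorrelation is computed, so the resulting features are insensitive to the input level. The paper's actual equalized autocorrelation function and the FRLS recursion that consumes it are not reproduced.

```python
import numpy as np

def equalized_autocorrelation(frame, n_lags=12):
    """Sketch of a level-equalized autocorrelation: the frame is first scaled
    to unit RMS so feature extraction is insensitive to signal level.
    Illustrative only; not the paper's definition.
    """
    rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
    x = frame / rms                                       # equalize every frame to unit RMS
    return np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(n_lags + 1)])

frame = 0.01 * np.random.randn(256)                       # a quiet frame
loud = 100.0 * frame                                      # the same frame at a high level
print(np.allclose(equalized_autocorrelation(frame), equalized_autocorrelation(loud)))
```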


Speech Recognition Performance Improvement using Gamma-tone Feature Extraction Acoustic Model (감마톤 특징 추출 음향 모델을 이용한 음성 인식 성능 향상)

  • Ahn, Chan-Shik;Choi, Ki-Ho
    • Journal of Digital Convergence / v.11 no.7 / pp.209-214 / 2013
  • To improve the recognition performance of speech recognition systems, a method modeled on human listening skills is incorporated into the system: in noisy environments, the speech signal is separated from the noise and the desired speech signal is selected. In practice, however, recognition performance suffers because environmental changes caused by noise make speech detection inaccurate and create a mismatch with the learned model. In this paper, feature extraction using gammatone filters and a corresponding acoustic learning model are proposed to improve speech recognition. The proposed method reflects auditory scene analysis, based on human auditory perception, in the feature extraction and in the training of the recognition models. In a performance evaluation under noisy conditions, removing noise from signals at -10 dB and -5 dB yielded SNR improvements of 3.12 dB and 2.04 dB, respectively.
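
A rough sketch of gammatone-filterbank feature extraction, assuming ERB-spaced 4th-order gammatone filters applied as truncated FIR impulse responses with per-frame log band energies. The band count, window sizes, and the acoustic-model training stage from the paper are not reproduced.

```python
import numpy as np

def gammatone_features(signal, fs, n_bands=24, f_lo=100.0, win=400, hop=160):
    """Log band energies from an ERB-spaced gammatone filterbank (sketch)."""
    def erb(f):                                           # equivalent rectangular bandwidth
        return 24.7 * (4.37 * f / 1000.0 + 1.0)

    def erb_rate(f):                                      # ERB-rate scale
        return 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)

    def inv_erb_rate(e):
        return (10 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

    centres = inv_erb_rate(np.linspace(erb_rate(f_lo), erb_rate(0.9 * fs / 2), n_bands))
    t = np.arange(int(0.025 * fs)) / fs                   # 25 ms truncated impulse responses
    feats = []
    for fc in centres:
        g = t ** 3 * np.exp(-2.0 * np.pi * 1.019 * erb(fc) * t) * np.cos(2.0 * np.pi * fc * t)
        y = np.convolve(signal, g / (np.sum(np.abs(g)) + 1e-12), mode="same")
        energies = [np.log(np.sum(y[i:i + win] ** 2) + 1e-10)
                    for i in range(0, len(y) - win, hop)]
        feats.append(energies)
    return np.array(feats).T                              # shape: (frames, bands)

x = np.random.randn(16000)                                # 1 s of noise at 16 kHz as a stand-in
print(gammatone_features(x, fs=16000).shape)
```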

Speech Encryption Scheme Using Frequency Band Scrambling (대역 스크램블을 이용한 음성 보호방식)

  • Ji, Hyung-Kun;Lee, Dong-Wook
    • Proceedings of the KIEE Conference / 1999.11c / pp.700-702 / 1999
  • The protection of data that must be kept secret from unauthorized users has become a major topic. This paper introduces an encryption scheme for protecting speech signals from eavesdropping. The proposed scheme adopts a secure voice cryptographic algorithm based on scrambling in the frequency domain: to improve on conventional speech encryption schemes, the DCT coefficients of the speech signal are randomly permuted. Simulation results show the performance of the proposed algorithm for secure transmission of speech signals.
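
A minimal sketch of the scrambling idea: the DCT coefficients of a speech frame are permuted with a key-seeded pseudo-random permutation and transformed back to the time domain; a receiver holding the same key inverts the permutation. The frame size and keying scheme are illustrative assumptions, not the paper's design.

```python
import numpy as np
from scipy.fft import dct, idct

def scramble(frame, key):
    """Permute DCT coefficients of a frame with a key-seeded permutation."""
    coeffs = dct(frame, norm="ortho")
    perm = np.random.default_rng(key).permutation(len(coeffs))
    return idct(coeffs[perm], norm="ortho")

def descramble(frame, key):
    """Undo the keyed permutation and return to the time domain."""
    coeffs = dct(frame, norm="ortho")
    perm = np.random.default_rng(key).permutation(len(coeffs))
    restored = np.empty_like(coeffs)
    restored[perm] = coeffs                               # invert the permutation
    return idct(restored, norm="ortho")

key = 20231115
x = np.sin(2 * np.pi * 440 * np.arange(256) / 8000)       # a 440 Hz tone frame
print(np.allclose(descramble(scramble(x, key), key), x))  # True: exact recovery
```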


On detecting the transition segments of a speech signal by the energy approximation degree of the synchronized pitch (피치 동기된 에너지 유사도에 의한 음성신호의 전이구간 검출)

  • 김종득;박형빈;김대호;배명진
    • Proceedings of the IEEK Conference / 1998.06a / pp.603-606 / 1998
  • In large-vocabulary and continuous speech recognition systems that use the phoneme as the recognition unit, segmentation processing is necessary. In this paper, a new method based on a normalized AMDF is proposed. The suggested parameter represents the degree of sharpness at a valley point. The method can detect the segments between steady-state and transient regions of continuous speech without prior information about the speech signal.
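
The sketch below computes a normalized AMDF and a simple valley-sharpness measure of the kind the abstract alludes to; the exact normalization and the pitch-synchronous energy similarity used in the paper are assumptions here.

```python
import numpy as np

def normalized_amdf(frame, max_lag=None):
    """Normalized Average Magnitude Difference Function (sketch): valleys mark
    pitch-period candidates. Normalization by the frame's mean absolute value
    is an illustrative choice.
    """
    x = np.asarray(frame, dtype=float)
    max_lag = max_lag or len(x) // 2
    amdf = np.array([np.mean(np.abs(x[:len(x) - k] - x[k:])) for k in range(1, max_lag)])
    return amdf / (np.mean(np.abs(x)) + 1e-12)

def valley_sharpness(amdf):
    """Depth of the deepest valley relative to the AMDF mean: high values suggest
    a clearly periodic (steady-state) region, low values a transition."""
    k = int(np.argmin(amdf))
    return float((np.mean(amdf) - amdf[k]) / (np.mean(amdf) + 1e-12)), k + 1

fs = 8000
t = np.arange(320) / fs
voiced = np.sign(np.sin(2 * np.pi * 100 * t))             # crude 100 Hz periodic frame
print(valley_sharpness(normalized_amdf(voiced)))            # sharp valley near lag 80
```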


Extraction of Speaker Recognition Parameter Using Chaos Dimension (카오스차원에 의한 화자식별 파라미터 추출)

  • Yoo, Byong-Wook;Kim, Chang-Seok
    • Speech Sciences / v.1 / pp.285-293 / 1997
  • This paper investigates the strange attractor of speech, which is regarded as chaotic in the sense that an apparently random signal is produced by a deterministic system. The delay time needed to construct an attractor that fits the speech signal is obtained from the AR-model power spectrum. Applying Takens' embedding theorem with this delay time yields an accurate correlation dimension. The analysis shows that the correlation dimension provides a speaker-discriminating characteristic parameter and achieves a high speaker identification rate.
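
A sketch of a Grassberger-Procaccia style correlation-dimension estimate from a delay embedding (Takens' theorem). The embedding dimension, radii, and toy test signal are illustrative; the paper obtains the delay time from an AR-model power spectrum rather than fixing it by hand.

```python
import numpy as np

def correlation_dimension(x, delay, dim=3, radii=None):
    """Delay-embed the signal, compute the correlation sum C(r), and fit the
    slope of log C(r) versus log r (sketch)."""
    n = len(x) - (dim - 1) * delay
    emb = np.stack([x[i * delay:i * delay + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                        # pairwise distances, i < j
    if radii is None:
        radii = np.logspace(np.log10(np.percentile(d, 5)), np.log10(np.percentile(d, 50)), 8)
    c = np.array([np.mean(d < r) for r in radii])          # correlation sum C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(c + 1e-12), 1)
    return slope

# Toy check on a chaotic 1-D signal (logistic map); the estimate should be near 1
x = np.empty(1200); x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
print(correlation_dimension(x[200:], delay=1, dim=3))
```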


Sub-Nyquist Nonuniform Sampling and Perfect Reconstruction of Speech Signals (음성신호의 Sub-Nyquist 비균일 표준화 및 완전 복구에 관한 연구)

  • Lee, He-Young
    • Speech Sciences / v.12 no.2 / pp.153-170 / 2005
  • A sub-Nyquist nonuniform sampling (SNNS) scheme and a perfect reconstruction (PR) formula are proposed as a systematic method for obtaining a minimal representation of a speech signal. In the proposed method, the instantaneous sampling frequency (ISF) varies with the least upper bound of the spectral support of the speech signal in the time-frequency domain (TFD). The instantaneous bandwidth (IB), which determines the ISF and is used to generate a set of samples that represents the continuous-time signal perfectly, is defined. The spectral characteristics of the data produced by the sub-Nyquist nonuniform sampling method are also analyzed. Because the instantaneous bandwidth of a speech signal is time-varying, the proposed method does not generate redundant samples.
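
A sketch of the adaptive-rate idea, assuming the local bandwidth is taken as the frequency below which nearly all of a window's energy lies and the instantaneous sampling frequency is set to twice that bound; the paper's exact instantaneous-bandwidth definition and its perfect-reconstruction formula are not reproduced.

```python
import numpy as np

def adaptive_sample_times(x, fs, win=512, hop=512, energy_frac=0.999):
    """Choose sampling instants per window from an estimated local bandwidth
    (sketch): ISF = 2 x least upper bound of local spectral support."""
    times = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win]
        spec = np.abs(np.fft.rfft(seg * np.hanning(win))) ** 2
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        cum = np.cumsum(spec) / (np.sum(spec) + 1e-12)
        f_max = freqs[np.searchsorted(cum, energy_frac)]   # least upper bound of support
        isf = max(2.0 * f_max, 2.0 * freqs[1])              # instantaneous sampling frequency
        times.extend(np.arange(start / fs, (start + hop) / fs, 1.0 / isf))
    return np.array(times)

fs = 16000
t = np.arange(fs) / fs
x = np.where(t < 0.5, np.sin(2 * np.pi * 300 * t), np.sin(2 * np.pi * 3000 * t))
times = adaptive_sample_times(x, fs)
print(len(times), "samples instead of", len(x))             # fewer samples where bandwidth is low
```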
