• Title/Summary/Keyword: Speech signal analysis

Overlapped Subband-Based Independent Vector Analysis

  • Jang, Gil-Jin;Lee, Te-Won
    • The Journal of the Acoustical Society of Korea / v.27 no.1E / pp.30-34 / 2008
  • An improvement to an existing blind signal separation (BSS) method is presented in this paper. The proposed method models the inherent dependency observed in acoustic objects in order to separate real-world convolutive sound mixtures. The frequency-domain approach requires solving the well-known permutation problem, which has been solved successfully by a vector representation of the sources whose multidimensional joint densities exhibit a certain amount of dependency expressed by non-spherical distributions. For speech signals in particular, we observe strong dependencies across neighboring frequency bins, and those dependencies decrease as the bins become farther apart. The non-spherical joint density model proposed in this paper reflects this property of real-world speech signals. Experimental results show improved performance over spherical joint density representations.
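The key observation behind the non-spherical density model, that dependency between frequency bins weakens as the bins move apart, is easy to check numerically. The sketch below is a minimal illustration of that observation only, not the authors' separation algorithm; the file path, STFT parameters, and bin distances are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

# Placeholder input: any mono speech recording.
fs, x = wavfile.read("speech_sample.wav")
x = x.astype(float)

# Log-magnitude spectrogram (frequency bins x frames).
_, _, Z = stft(x, fs=fs, nperseg=512, noverlap=384)
logmag = np.log(np.abs(Z) + 1e-8)

def mean_bin_correlation(logmag, d):
    """Average correlation between bin k and bin k+d over all k,
    used here as a crude measure of inter-bin dependency."""
    corrs = []
    for k in range(logmag.shape[0] - d):
        corrs.append(np.corrcoef(logmag[k], logmag[k + d])[0, 1])
    return float(np.mean(corrs))

for d in (1, 2, 4, 8, 16, 32):
    print(f"bin distance {d:2d}: mean correlation {mean_bin_correlation(logmag, d):.3f}")
```

For real speech the measured correlation typically decays as the bin distance grows, which is the behavior the non-spherical joint density is designed to capture.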

Influence Analysis of Food on Body Organs by Applying Speech Signal Processing Techniques (음성신호처리 기술을 적용한 음식물이 인체 장기에 미치는 영향 분석)

  • Kim, Bong-Hyun;Cho, Dong-Uk
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.5A / pp.388-394 / 2012
  • In this paper, an analysis of the influence of food on human body organs is proposed by applying speech signal processing techniques. To date, most research on the influence of food on body organs has been of the form "ingredient A of a food may have a good effect on organ B"; numerical, quantified studies of these effects have hardly been performed. This paper therefore proposes a method to quantify the effects using numerical data, so as to derive new facts and information. In particular, this paper investigates the effect of tomatoes on human heart function. The experiment collected voice samples from 15 males in their twenties with no abnormal heart function, before eating and 5 minutes, 30 minutes, and 1 hour afterward; voice signal components were then used to measure changes in heart condition, digitizing and quantifying the effect of tomatoes on cardiac function.

A Study on Speech Recognition using Vocal Tract Area Function (성도 면적 함수를 이용한 음성 인식에 관한 연구)

  • 송제혁;김동준
    • Journal of Biomedical Engineering Research / v.16 no.3 / pp.345-352 / 1995
  • LPC cepstrum coefficients, which are acoustic features of the speech signal, have been widely used as feature parameters for various speech recognition systems and have shown good performance. The vocal tract area function is a kind of articulatory feature, related to the physiological mechanism of speech production. This paper proposes the vocal tract area function as an alternative feature parameter for speech recognition. Linear predictive analysis using the Burg algorithm and vector quantization are performed. Recognition experiments on 5 Korean vowels and 10 digits are then executed using both the conventional LPC cepstrum coefficients and the vocal tract area function. Recognition using the area function showed slightly better results than recognition using the conventional LPC cepstrum coefficients.
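As a rough illustration of how an area function can be derived from linear predictive analysis, the sketch below uses Burg's recursion to obtain PARCOR/reflection coefficients and then maps them to section-to-section area ratios of a lossless tube model. This is a generic textbook sketch under common assumptions, not the paper's exact procedure; the sign convention of the reflection coefficients, the first-section normalization, and the test frame are all placeholders.

```python
import numpy as np

def burg_reflection(frame, order):
    """Estimate reflection (PARCOR) coefficients with Burg's recursion."""
    ef = np.asarray(frame, dtype=float).copy()   # forward prediction error
    eb = ef.copy()                               # backward prediction error
    k = np.zeros(order)
    for m in range(order):
        f = ef[m + 1:]
        b = eb[m:-1]
        k[m] = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b) + 1e-12)
        ef_new = f + k[m] * b
        eb_new = b + k[m] * f
        ef[m + 1:] = ef_new
        eb[m + 1:] = eb_new
    return k

def area_function(k, a0=1.0):
    """Relative cross-sectional areas of a lossless tube model.
    Assumes k_i = (A_{i+1} - A_i) / (A_{i+1} + A_i); conventions differ."""
    areas = [a0]
    for ki in k:
        areas.append(areas[-1] * (1.0 + ki) / (1.0 - ki))
    return np.array(areas)

# Example: one 30 ms frame, 10th-order analysis (typical for 8 kHz speech).
rng = np.random.default_rng(0)
frame = rng.standard_normal(240)   # placeholder for a real speech frame
k = burg_reflection(frame * np.hamming(len(frame)), order=10)
print(area_function(k))
```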

Consecutive Vowel Segmentation of Korean Speech Signal using Phonetic-Acoustic Transition Pattern (음소 음향학적 변화 패턴을 이용한 한국어 음성신호의 연속 모음 분할)

  • Park, Chang-Mok;Wang, Gi-Nam
    • Annual Conference of KIPS / 2001.10a / pp.801-804 / 2001
  • This article is concerned with the automatic segmentation of two adjacent vowels in speech signals. Every kind of transition between adjacent vowels can be characterized by its spectrogram. First, the voiced speech is extracted by histogram analysis of a vowel indicator composed of wavelet low-pass components. Second, given the phonetic transcription and a transition-pattern spectrogram, the voiced-speech portion containing consecutive vowels is automatically segmented by template matching. The cross-correlation function is adopted as the template matching method, and a modified correlation coefficient is calculated for every frame. The largest value in the modified correlation coefficient series indicates the boundary between the two consecutive vowel sounds. The experiment was performed on 154 vowel transition sets: 154 spectrogram templates were gathered from 154 words (PRW Speech DB), and 161 test words (PBW Speech DB) uttered by 5 speakers were tested. The experimental results show the validity of the method.
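The boundary search described above can be sketched as a frame-wise normalized cross-correlation between a transition-pattern template spectrogram and the spectrogram of the voiced segment, with the best-matching frame taken as the vowel boundary. This is a minimal illustration of template matching only; the paper's modified correlation coefficient is not specified here, so plain normalized correlation is used, and the spectrograms are placeholders.

```python
import numpy as np

def find_vowel_boundary(segment_spec, template_spec):
    """Slide a transition template over a spectrogram and return the
    frame index with the largest normalized correlation coefficient."""
    n_bins, n_frames = segment_spec.shape
    t_bins, t_frames = template_spec.shape
    assert n_bins == t_bins and n_frames >= t_frames
    t = template_spec - template_spec.mean()
    t_norm = np.linalg.norm(t) + 1e-12
    scores = np.empty(n_frames - t_frames + 1)
    for i in range(len(scores)):
        w = segment_spec[:, i:i + t_frames]
        w = w - w.mean()
        scores[i] = np.sum(w * t) / ((np.linalg.norm(w) + 1e-12) * t_norm)
    # Center of the best-matching template position approximates the boundary.
    return int(np.argmax(scores) + t_frames // 2), scores

# Usage with placeholder spectrograms (129 bins; 80 and 12 frames).
rng = np.random.default_rng(1)
seg, tpl = rng.random((129, 80)), rng.random((129, 12))
boundary, _ = find_vowel_boundary(seg, tpl)
print("estimated boundary frame:", boundary)
```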

An Interdisciplinary Study of A Leaders' Voice Characteristics: Acoustical Analysis and Members' Cognition

  • Hahm, SangWoo;Park, Hyungwoo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4849-4865 / 2020
  • The traditional role of leaders is to influence members and motivate them to achieve shared goals in organizations. In practice, however, leaders such as top managers and chief executive officers do not always directly meet or influence other company members; they tend to have the greatest impact on their members through formal speeches, company procedures, and the like. As such, official speech is directly related to the motivation of company employees. In an official speech, not only the contents of the speech but also the voice characteristics of the speaker have an important influence on listeners, as different vocal characteristics can have different effects on the listener. Therefore, depending on the voice characteristics of a leader, the cognition of the members may change, and the degree to which the members are influenced and motivated will differ. This study identifies how members may perceive a speech differently according to the different voice characteristics of leaders in formal speeches. Further, different perceptions of voices will influence members' cognition of the leader, for example, how trustworthy the leader appears. The study analyzed recorded speeches of leaders and extracted features of their speaking style through digital speech signal analysis. Parameters were then extracted and analyzed with time-domain, frequency-domain, and spectrogram-domain methods, and we also analyzed the parameters for use in natural language processing. We investigated which of a leader's voice characteristics had more influence on members or were more effective. A person's voice characteristics can be changed; therefore, leaders who seek to influence members in formal speeches should adopt effective voice characteristics to motivate followers.

Speech Transition Detection and approximate-synthesis Method for Speech Signal Compression and Recovery (음성신호 압축 및 복원을 위한 음성 천이구간 검출과 근사합성 방식)

  • Lee, Kwang-Seok;Kim, Bong-Gi;Kang, Seong-Soo;Kim, Hyun-Deok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.05a / pp.763-767 / 2008
  • In a speech coding system using separate voiced and unvoiced excitation sources, speech quality is distorted when voiced and unvoiced consonants coexist in a single frame. We therefore propose a method for searching and extracting the transition segment (TS) containing the unvoiced consonant, so that voiced and unvoiced consonants do not coexist in one frame. This research presents a new method of TS approximate synthesis using least mean squares and frequency band division. As a result, the method obtains high-quality approximate-synthesis waveforms within the TS using only the frequency information below 0.547 kHz and above 2.813 kHz. Importantly, even the maximum error signal yields a low-distortion approximate-synthesis waveform within the TS. The method can be applied to a new voiced/silence/TS speech coding scheme, as well as to speech analysis and speech synthesis.
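A rough sketch of the frequency band division idea is shown below: a transition segment is approximated from only its low band below 0.547 kHz and its high band above 2.813 kHz, and the residual error is reported. This is an illustration only, not the authors' LMS procedure; the cutoff frequencies come from the abstract, while the sampling rate, filter design, and error measure are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 8000  # assumed sampling rate

def band_division_approximation(ts, fs):
    """Approximate a transition segment from its low and high bands only."""
    sos_lo = butter(4, 547.0, btype="lowpass", fs=fs, output="sos")
    sos_hi = butter(4, 2813.0, btype="highpass", fs=fs, output="sos")
    approx = sosfiltfilt(sos_lo, ts) + sosfiltfilt(sos_hi, ts)
    rel_err = np.sqrt(np.mean((ts - approx) ** 2)) / (np.sqrt(np.mean(ts ** 2)) + 1e-12)
    return approx, rel_err

# Placeholder transition segment (about 40 ms of signal).
rng = np.random.default_rng(2)
ts = rng.standard_normal(int(0.04 * fs))
approx, rel_err = band_division_approximation(ts, fs)
print(f"relative RMS error of band-division approximation: {rel_err:.3f}")
```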

On a Study of Detecting First Formant Using Autocorrelation Method (자기상관법을 이용한 제 1 포만트 검출법에 관한 연구)

  • 강은영;민소연;배명진
    • Proceedings of the IEEK Conference / 2001.06d / pp.285-288 / 2001
  • In speech analysis, estimating formant center frequencies exactly is very important. If we know the formant frequencies, we can predict which phoneme was uttered. Generally, the magnitude of the first formant in voiced speech is about 10 dB greater than that of the other formants, so the time-domain shape of the voice signal is affected mainly by the first formant. Therefore, the first formant frequency can be estimated roughly using the zero-crossing rate (ZCR). In this paper, we propose an improved method for obtaining the first formant frequency from the ZCR: autocorrelation is applied before computing the ZCR, which smooths the voice signal and emphasizes the first formant. As a result, we obtain a more exact ZCR and first formant frequency. Conventional formant estimation is performed in the frequency domain, whereas the proposed method works in the time domain and is therefore very simple.
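The time-domain procedure can be sketched in a few lines: autocorrelate a voiced frame, which smooths it and emphasizes the dominant first-formant oscillation, then convert the zero-crossing rate of the autocorrelation into a rough frequency estimate. This is a minimal sketch under the assumptions stated in the abstract; the frame length and the synthetic test signal are placeholders.

```python
import numpy as np

def zcr_frequency(x, fs):
    """Rough dominant frequency from the zero-crossing rate:
    each full cycle produces two zero crossings."""
    crossings = np.sum(np.abs(np.diff(np.signbit(x).astype(int))))
    return crossings * fs / (2.0 * len(x))

def estimate_f1(frame, fs):
    """Autocorrelate the frame first, then measure its ZCR."""
    frame = frame - frame.mean()
    r = np.correlate(frame, frame, mode="full")
    r = r[len(r) // 2:]          # keep non-negative lags only
    return zcr_frequency(r, fs)

# Synthetic voiced-like frame: strong 700 Hz component plus a weaker one.
fs = 8000
t = np.arange(int(0.03 * fs)) / fs
frame = np.sin(2 * np.pi * 700 * t) + 0.3 * np.sin(2 * np.pi * 1800 * t)
print(f"estimated F1 ~ {estimate_f1(frame, fs):.0f} Hz")
```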

Implementation of Variable Threshold Dual Rate ADPCM Speech CODEC Considering the Background Noise (배경잡음을 고려한 가변임계값 Dual Rate ADPCM 음성 CODEC 구현)

  • Yang, Jae-Seok;Han, Kyong-Ho
    • Proceedings of the KIEE Conference / 2000.07d / pp.3166-3168 / 2000
  • This paper proposes a variable-threshold dual-rate ADPCM coding method, modified from the standard ITU G.726 ADPCM, for speech quality improvement. The speech quality of variable-threshold dual-rate ADPCM is better than that of single-rate ADPCM in noisy environments, without increasing complexity, because it uses the zero-crossing rate (ZCR). The ZCR is used to divide the input samples into two categories, noise and speech: samples with higher ZCR are categorized as the noisy region and samples with lower ZCR as the speech region. The noisy region uses a higher threshold value and is compressed at 16 kbps for a reduced bit rate, while the speech region uses a lower threshold value and is compressed at 40 kbps for improved speech quality. Compared with conventional ADPCM, which uses a fixed coding rate, the proposed variable-threshold dual-rate method improves noise behavior without increasing the bit rate. For real-time applications, ZCR calculation was considered as a simple way to obtain background-noise information before speech analysis such as FFT, and experiments showed that this simple calculation can be used without increasing complexity. Dual-rate ADPCM can decrease the amount of transferred data efficiently without increasing complexity or reducing speech quality, so the results of this paper can be applied to real-time speech applications such as Internet phone or VoIP.
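The rate-selection logic can be sketched as follows: compute the ZCR per frame and choose the ADPCM rate by comparing it against a threshold, 16 kbps (2 bits/sample at 8 kHz) for high-ZCR noise-like frames and 40 kbps (5 bits/sample) for low-ZCR speech frames. This is a minimal sketch of the selection step only; the threshold value and frame size are assumptions, and the G.726 quantizer itself is not reproduced here.

```python
import numpy as np

FRAME = 160          # 20 ms at 8 kHz (assumed)
ZCR_THRESHOLD = 0.35 # assumed; tuned per environment in practice

def zero_crossing_rate(frame):
    signs = np.signbit(frame).astype(int)
    return np.mean(np.abs(np.diff(signs)))

def select_adpcm_rate(frame):
    """High-ZCR frames are treated as noise-like and coded at 16 kbps;
    low-ZCR frames are treated as speech and coded at 40 kbps."""
    return 16000 if zero_crossing_rate(frame) > ZCR_THRESHOLD else 40000

def frame_rates(signal):
    n = len(signal) // FRAME
    return [select_adpcm_rate(signal[i * FRAME:(i + 1) * FRAME]) for i in range(n)]

# Placeholder signal: noise followed by a low-frequency tone ("speech-like").
fs = 8000
noise = np.random.default_rng(3).standard_normal(fs)
tone = np.sin(2 * np.pi * 200 * np.arange(fs) / fs)
print(frame_rates(np.concatenate([noise, tone]))[:10], "...")
```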

Design and Implementation of Simple Text-to-Speech System using Phoneme Units (음소단위를 이용한 소규모 문자-음성 변환 시스템의 설계 및 구현)

  • Park, Ae-Hee;Yang, Jin-Woo;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea / v.14 no.3 / pp.49-60 / 1995
  • This paper is a study on the design and implementation of a Korean text-to-speech system intended for small and simple systems. A parameter synthesis method is chosen as the speech synthesis method, using PARCOR (PARtial autoCORrelation) coefficients, one form of LPC analysis. The phoneme, the basic unit of speech, is used as the synthesis unit. PARCOR coefficients, pitch, and amplitude are used as the synthesis parameters for voiced sounds, while the residual signal and PARCOR coefficients are used for unvoiced sounds. We obtained 60% intelligibility by using the residual signal as the excitation of unvoiced sounds. Synthesis experiments show that word-unit synthesis is feasible; control of phoneme duration is necessary for synthesizing sentence units. The synthesis system was built with a 486 PC, a 70 Hz-4.5 kHz band-pass filter for speech input/output, an amplifier, and a TMS320C30 DSP board.
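To make the PARCOR-based synthesis step concrete, the sketch below converts PARCOR (reflection) coefficients to an all-pole predictor with the step-up recursion and drives it with an excitation: an impulse train for voiced sounds, or noise standing in for the residual for unvoiced sounds. This is a generic textbook sketch, not the authors' TMS320C30 implementation; the coefficient sign convention and all parameter values are assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def parcor_to_lpc(k):
    """Step-up recursion: reflection coefficients -> LPC polynomial A(z)."""
    a = np.array([1.0])
    for km in k:
        a = np.concatenate([a, [0.0]])
        a = a + km * a[::-1]
    return a

def synthesize(k, excitation, gain=1.0):
    """All-pole synthesis filter 1/A(z) driven by the excitation."""
    return lfilter([gain], parcor_to_lpc(k), excitation)

fs = 8000
k = np.array([-0.7, 0.4, -0.2, 0.1, -0.05])   # placeholder PARCOR values

# Voiced excitation: impulse train at a 125 Hz pitch.
voiced_exc = np.zeros(fs // 5)
voiced_exc[::fs // 125] = 1.0

# Unvoiced excitation: noise standing in for the residual signal.
unvoiced_exc = np.random.default_rng(4).standard_normal(fs // 5)

voiced = synthesize(k, voiced_exc)
unvoiced = synthesize(k, unvoiced_exc, gain=0.1)
print(voiced.shape, unvoiced.shape)
```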

The Flattening Algorithm of Speech Spectrum by Quadrature Mirror Filter (QMF에 의한 음성스펙트럼의 평탄화 알고리즘)

  • Min, So-Yeon
    • Journal of the Korea Academia-Industrial cooperation Society / v.7 no.5 / pp.907-912 / 2006
  • Pre-emphasizing speech compensates for the fall-off at high frequencies. The most common form of pre-emphasis is y(n) = s(n) - A·s(n-1), where A typically lies between 0.9 and 1.0 for voiced signals; this value reflects the degree of pre-emphasis and equals R(1)/R(0) in the conventional method. This paper proposes a new flattening method to compensate for the weakened high-frequency components caused by vocal-cord characteristics. We use a QMF (Quadrature Mirror Filter) to minimize distortion of the output signal. After the QMF is used to compensate the high-frequency components, flattening with R(1)/R(0) is performed on each frame. Experimental results show that the proposed method flattened the weakened high-frequency components more effectively than the autocorrelation method. The flattening algorithm can therefore be applied in speech signal processing tasks such as speech recognition, speech analysis, and synthesis.
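The conventional frame-wise baseline referenced above, pre-emphasis with A = R(1)/R(0), is easy to state in code. The sketch below shows only that baseline, not the proposed QMF-based compensation; the frame length and test signal are assumptions.

```python
import numpy as np

def adaptive_preemphasis(frame):
    """Pre-emphasis y(n) = s(n) - A*s(n-1) with A = R(1)/R(0),
    the normalized lag-1 autocorrelation of the frame."""
    s = np.asarray(frame, dtype=float)
    r0 = np.dot(s, s)
    r1 = np.dot(s[1:], s[:-1])
    A = r1 / (r0 + 1e-12)          # close to 1 for strongly voiced frames
    y = np.empty_like(s)
    y[0] = s[0]
    y[1:] = s[1:] - A * s[:-1]
    return y, A

def flatten(signal, frame_len=256):
    """Apply the adaptive pre-emphasis frame by frame (frame length assumed)."""
    out = np.copy(np.asarray(signal, dtype=float))
    for i in range(0, len(out) - frame_len + 1, frame_len):
        out[i:i + frame_len], _ = adaptive_preemphasis(out[i:i + frame_len])
    return out

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 150 * t) + 0.05 * np.sin(2 * np.pi * 3000 * t)
y = flatten(x)
print("A for first frame:", adaptive_preemphasis(x[:256])[1])
```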
