• Title/Summary/Keyword: speech waveform

Search results: 135

An ACLMS-MPC Coding Method Integrated with ACFBD-MPC and LMS-MPC at 8kbps bit rate. (8kbps 비트율을 갖는 ACFBD-MPC와 LMS-MPC를 통합한 ACLMS-MPC 부호화 방식)

  • Lee, See-woo
    • Journal of Internet Computing and Services / v.19 no.6 / pp.1-7 / 2018
  • This paper presents an 8 kbps ACLMS-MPC (Amplitude Compensation and Least Mean Square - Multi Pulse Coding) method that integrates ACFBD-MPC (Amplitude Compensation Frequency Band Division - Multi Pulse Coding) and LMS-MPC (Least Mean Square - Multi Pulse Coding), using V/UV/S (Voiced/Unvoiced/Silence) switching, compensation of the multi-pulses in each pitch interval, and unvoiced approximate synthesis from specific frequency bands, in order to reduce distortion of the synthesized waveform. In integrating the methods, it is important to adjust the bit rate of the voiced and unvoiced sound sources to 8 kbps while reducing distortion of the speech waveform. With the bit rate so adjusted, the speech waveform can be synthesized efficiently by restoring the individual pitch intervals using the multi-pulses of the representative interval. I implemented the ACLMS-MPC method and evaluated its SNR under 8 kbps coding conditions. As a result, the SNR of ACLMS-MPC was 15.0 dB for a female voice and 14.3 dB for a male voice. ACLMS-MPC therefore improved on the existing MPC, ACFBD-MPC, and LMS-MPC by 0.3 dB~1.8 dB for the male voice and 0.3 dB~1.6 dB for the female voice. These methods are expected to apply to low-bit-rate speech coding using sound sources, such as in cellular phones or Internet telephony. In future work, I will evaluate the sound quality of a 6.9 kbps speech coding method that simultaneously compensates the amplitude and position of the multi-pulse source.
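
The SNR comparison above uses the standard definition: the ratio of the original signal's energy to the energy of the coding error, in dB. A minimal sketch in Python (the sine test signal and the 10% amplitude error are invented for illustration, not the paper's data):

```python
import math

def snr_db(original, synthesized):
    """Signal-to-noise ratio in dB between an original waveform and
    its coded/synthesized version (signals must be sample-aligned)."""
    signal_energy = sum(s * s for s in original)
    noise_energy = sum((s - y) ** 2 for s, y in zip(original, synthesized))
    if noise_energy == 0.0:
        return float("inf")  # perfect reconstruction
    return 10.0 * math.log10(signal_energy / noise_energy)

# Toy example: a 440 Hz sine at 8 kHz sampling, and a copy with 10% amplitude error
original = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
synthesized = [0.9 * s for s in original]
print(round(snr_db(original, synthesized), 1))  # 20.0
```

On this scale, the 0.3~1.8 dB gains the paper reports over the earlier MPC variants are meaningful but modest improvements.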

Design and Implementation of Salivary Electrical Stimulator for xerostomia

  • Lee, Jihyeon;Yeom, Hojun
    • International journal of advanced smart convergence / v.6 no.4 / pp.19-25 / 2017
  • After the age of 40, the salivary glands age and saliva production declines, causing xerostomia (dry mouth). Side effects of drugs the elderly commonly take, such as antihypertensives or diuretics, can cause xerostomia, as can common conditions including autoimmune diseases, diabetes, anemia, and depression. When saliva secretion is insufficient, tooth decay and gum disease are likely to occur, and digestion also suffers from the shortage of amylase, the digestive enzyme in saliva. Once a salivary gland has degenerated, it is difficult to restore it to its normal state. In this paper, we apply electrical stimulation to the masseter muscle, which lies against the parotid gland, and further stimulate salivation through speech recognition of words used in oral gymnastics. An STM32F407VG is used to implement the system for relieving xerostomia.

Effect of Glottal Wave Shape on the Vowel Phoneme Synthesis (성문파형이 모음음소합성에 미치는 영향)

  • 안점영;김명기
    • The Journal of Korean Institute of Communications and Information Sciences / v.10 no.4 / pp.159-167 / 1985
  • It was demonstrated that glottal waves differ depending on the vowel when derived directly from the Korean vowels /a, e, i, o, u/ recorded by a male speaker. After resynthesizing the vowels with five simulated glottal waves, the effects of glottal wave shape on speech synthesis were compared in terms of the waveforms. Changes could be seen in the waveforms of the synthetic vowels as the shape, opening time, and closing time varied; it was therefore confirmed that in speech synthesis the glottal wave shape is an important factor in improving speech quality.


Improvement of Bit Rate applying the Speaking Rate and PSOLA Technique of Speech in CELP Vocoder (음성신호의 발성율과 PSOLA기법을 적용한 음성 보코더 전송률 개선에 관한 연구)

  • 장경아;서지호;배명진
    • Proceedings of the IEEK Conference / 2003.11a / pp.45-48 / 2003
  • In general, speech coding methods are classified into three categories: waveform coding, source coding, and hybrid coding. Fast speech can be encoded with less information than slowly spoken speech. With respect to speaking rate, the low frequency band is more important to the listener than the high frequency band. Speech vocoding techniques are developing toward low bit rate, low complexity, and high sound quality. CELP-type vocoders deliver very good sound quality at low bit rates, but they do not take the speaking rate into account. If the speaking rate is estimated and each frame is encoded accordingly, the bit rate can be reduced below that of a conventional vocoder. We propose a technique that estimates the speaking rate and applies PSOLA to frames of slow speech. Simulation results show that the bit rate can be reduced by about 300 bps.

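
PSOLA itself places analysis frames at pitch marks; as a rough illustration of the overlap-add time-scaling idea behind it, here is a uniform-frame OLA sketch (the frame size, Hann window, and rate value are illustrative assumptions, not the authors' implementation):

```python
import math

def ola_time_scale(x, rate, frame=256, hop=128):
    """Very simplified overlap-add time-scale modification: rate > 1
    shortens the signal, as when compacting slowly spoken frames.
    Real PSOLA places the analysis frames at pitch marks instead of
    on a uniform grid."""
    out_len = int(len(x) / rate)
    out = [0.0] * (out_len + frame)
    acc = [0.0] * (out_len + frame)   # window sum, for normalization
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame - 1)) for n in range(frame)]
    t = 0
    while t < out_len:
        src = min(int(t * rate), max(0, len(x) - frame))  # analysis position
        for n in range(frame):
            if src + n < len(x):
                out[t + n] += win[n] * x[src + n]
                acc[t + n] += win[n]
        t += hop
    return [o / a if a > 1e-9 else 0.0 for o, a in zip(out, acc)][:out_len]

signal = [math.sin(2 * math.pi * 100 * n / 8000) for n in range(4000)]
shortened = ola_time_scale(signal, rate=1.25)   # play back 25% faster
print(len(signal), len(shortened))  # 4000 3200
```

Encoding the shortened signal and transmitting the rate factor is one way fewer bits per utterance could be spent, which is the intuition behind the bit-rate saving described above.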

Introduction to the Spectrum and Spectrogram (스팩트럼과 스팩트로그램의 이해)

  • Jin, Sung-Min
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.19 no.2 / pp.101-106 / 2008
  • Once the speech signal has been put into a form suitable for storage and analysis by computer, several different operations can be performed. Filtering, sampling, and quantization are the basic operations in digitizing a speech signal. The waveform can be displayed, measured, and even edited, and spectra can be computed using methods such as the Fast Fourier Transform (FFT), Linear Predictive Coding (LPC), the cepstrum, and filtering. The digitized signal can also be used to generate spectrograms. The spectrograph offers major advantages for the study of speech, so the author introduces the basic techniques of acoustic recording and digital signal processing, and the principles of the spectrum and spectrogram.

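
The short-time analysis behind a spectrogram can be sketched directly: window successive frames and transform each to the frequency domain. A minimal Python version (a direct DFT stands in for the FFT so the sketch needs no libraries; the frame/hop sizes and test tone are arbitrary choices):

```python
import cmath
import math

def spectrogram(x, frame=64, hop=32):
    """Magnitude spectrogram via a short-time DFT: Hann-window successive
    frames and transform each one (a real FFT would replace the direct
    DFT in practice)."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame - 1)) for n in range(frame)]
    frames = []
    for start in range(0, len(x) - frame + 1, hop):
        seg = [x[start + n] * win[n] for n in range(frame)]
        spectrum = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / frame)
                            for n in range(frame)))
                    for k in range(frame // 2 + 1)]  # non-negative frequencies only
        frames.append(spectrum)
    return frames  # time frames x frequency bins

fs = 800
x = [math.sin(2 * math.pi * 100 * n / fs) for n in range(400)]
spec = spectrogram(x)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin * fs / 64)  # 100.0 -- the bin nearest the 100 Hz tone
```

Plotting `frames` as an image, time on one axis and bin index on the other, gives the familiar spectrogram display.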

A Study on TSIUVC Approximate-Synthesis Method using Least Mean Square and Frequency Division (주파수 분할 및 최소 자승법을 이용한 TSIUVC 근사합성법에 관한 연구)

  • 이시우
    • Journal of Korea Multimedia Society / v.6 no.3 / pp.462-468 / 2003
  • In a speech coding system using voiced and unvoiced excitation sources, speech quality is distorted when voiced and unvoiced consonants coexist within a frame. I therefore propose a method of searching for and extracting the TSIUVC (Transition Segment Including Unvoiced Consonant) so that voiced and unvoiced consonants do not coexist in one frame. This paper presents a new method of TSIUVC approximate synthesis using the least mean square method and frequency band division. The method obtains high-quality approximate-synthesis waveforms within the TSIUVC using frequency information below 0.547 kHz and above 2.813 kHz. Notably, even the maximum error signal yields a low-distortion approximate-synthesis waveform within the TSIUVC. The method can be applied to a new Voiced/Silence/TSIUVC speech coder as well as to speech analysis and speech synthesis.

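
The least-mean-square fitting named in the title can be illustrated in its generic adaptive-filter form: the weights move along the negative gradient of the instantaneous squared error. A minimal Python sketch (the 2-tap "unknown system", tap count, and step size are invented for the demo, not the paper's configuration):

```python
import random

def lms_filter(x, d, taps=4, mu=0.01):
    """Least-mean-square adaptive FIR filter: at each step the weights
    are nudged along the negative gradient of the instantaneous squared
    error, so the filter output tracks the desired signal d."""
    w = [0.0] * taps
    for n in range(taps, len(x)):
        recent = x[n - taps:n][::-1]              # x[n-1], x[n-2], ...
        y = sum(wi * xi for wi, xi in zip(w, recent))
        e = d[n] - y                              # estimation error
        for i in range(taps):
            w[i] += 2 * mu * e * recent[i]        # LMS weight update
    return w

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(5000)]
# "Unknown" system to identify: d[n] = 0.7*x[n-1] - 0.2*x[n-2]
d = [0.0, 0.0] + [0.7 * x[n - 1] - 0.2 * x[n - 2] for n in range(2, len(x))]
w = lms_filter(x, d)
print(round(w[0], 2), round(w[1], 2))  # 0.7 -0.2
```

The same gradient update, applied to band-limited components of the excitation, is the kind of least-mean-square approximation the abstract refers to.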

A New Endpoint Detection Method Based on Chaotic System Features for Digital Isolated Word Recognition System

  • Zang, Xian;Chong, Kil-To
    • Proceedings of the IEEK Conference / 2009.05a / pp.37-39 / 2009
  • In speech recognition research, locating the beginning and end of an utterance against background noise is of great importance. Background noise present during recording introduces disturbances, whereas we want only the stationary parameters representing the corresponding speech section. In particular, a major source of error in automatic isolated-word recognition systems is inaccurate detection of the beginning and ending boundaries of the test and reference templates, so a robust method is needed to remove the unnecessary regions of a speech signal. Conventional methods for speech endpoint detection are based on two simple time-domain measurements, short-time energy and short-time zero-crossing rate, which cannot guarantee precise results in low signal-to-noise-ratio environments. This paper proposes a novel approach that computes the Lyapunov exponent of the time-domain waveform. The proposed method does not require frequency-domain parameters for the endpoint detection process, e.g. Mel-scale features, which have been introduced elsewhere. Compared with the conventional methods based on short-time energy and zero-crossing rate, the approach based on time-domain Lyapunov exponents (LEs) has low complexity and is suitable for digital isolated-word recognition systems.

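
The two conventional time-domain measures this abstract contrasts with are easy to state concretely. A minimal Python sketch (the frame size, threshold, and synthetic utterance are invented; the toy decision here uses energy alone, with the zero-crossing rate shown as the usual companion measure):

```python
import math

def short_time_energy(x, frame=80):
    """Mean squared amplitude per non-overlapping frame."""
    return [sum(v * v for v in x[i:i + frame]) / frame
            for i in range(0, len(x) - frame + 1, frame)]

def zero_crossing_rate(x, frame=80):
    """Fraction of adjacent sample pairs that change sign, per frame
    (high for fricatives and noise, low for voiced speech)."""
    rates = []
    for i in range(0, len(x) - frame + 1, frame):
        seg = x[i:i + frame]
        rates.append(sum(1 for a, b in zip(seg, seg[1:]) if a * b < 0) / frame)
    return rates

def detect_endpoints(x, frame=80, energy_thresh=0.01):
    """Return (start, end) sample indices of the utterance: the first and
    last frames whose short-time energy exceeds the threshold. Classical
    schemes then refine these boundaries with the zero-crossing rate."""
    e = short_time_energy(x, frame)
    active = [i for i, v in enumerate(e) if v > energy_thresh]
    if not active:
        return None
    return active[0] * frame, (active[-1] + 1) * frame

# Synthetic utterance: silence, a 200 Hz tone at 8 kHz sampling, silence
silence = [0.0] * 400
speech = [0.5 * math.sin(2 * math.pi * 200 * n / 8000) for n in range(800)]
x = silence + speech + silence
print(detect_endpoints(x))  # (400, 1200)
```

A fixed threshold like this is exactly what breaks down at low SNR, which is the weakness motivating the Lyapunov-exponent approach.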

Efficient Tracking of Speech Formant Using Closed Phase WRLS-VFF-VT Algorithm

  • Lee, Kyo-Sik;Park, Kyu-Sik
    • The Journal of the Acoustical Society of Korea / v.19 no.2E / pp.8-13 / 2000
  • In this paper, we present an adaptive formant tracking algorithm for speech using a closed-phase WRLS-VFF-VT method. Pitch-synchronous closed-phase methods are known to give more accurate estimates of the vocal tract parameters than pitch-asynchronous methods. However, the use of pitch-synchronous closed-phase analysis has been limited by the difficulty of accurately isolating the closed-phase region in successive periods of speech. We have therefore implemented the pitch-synchronous closed-phase WRLS-VFF-VT algorithm for speech analysis, especially formant tracking. The proposed algorithm with a variable threshold (VT) provides superior performance at phone boundaries and voiced/unvoiced transitions. The proposed method is compared experimentally with other methods, such as the two-channel CPC method, using synthetic waveforms and real speech data. The experimental results show that block data processing techniques such as two-channel CPC give reasonable estimates of the formants/antiformants, but the data windows used by these methods include the effects of the periodic excitation pulses, which reduce the accuracy of the estimated formants. In contrast, the proposed WRLS-VFF-VT method, which eliminates the influence of the pulse excitation by incorporating input estimation into the algorithm, gives very accurate formant/bandwidth estimates and good spectral matching.


Development of Speech-Language Therapy Program kMIT for Aphasic Patients Following Brain Injury and Its Clinical Effects (뇌 손상 후 실어증 환자의 언어치료 프로그램 kMIT의 개발 및 임상적 효과)

  • Kim, Hyun-Gi;Kim, Yun-Hee;Ko, Myoung-Hwan;Park, Jong-Ho;Kim, Sun-Sook
    • Speech Sciences / v.9 no.4 / pp.237-252 / 2002
  • MIT (Melodic Intonation Therapy) has been applied to nonfluent aphasic patients on the basis of hemispheric lateralization. However, its application across languages raises questions because of prosodic and rhythmic differences. The purpose of this study is to develop a Korean Melodic Intonation Therapy (kMIT) program on a personal computer and to assess its clinical effects on nonfluent aphasic patients. The algorithm comprises voice analog signal acquisition, PCM, the AMDF, the short-time autocorrelation function, and center clipping. The main menu contains pitch, waveform, sound intensity, and speech files in a window interface, and an aphasic patient's intonation pattern is overlaid on a selected kMIT pattern. Three aphasic patients, with or without kMIT training, participated in this study. Four affirmative sentences and two interrogative sentences were uttered on CSL in response to the speech therapist's stimuli. VOT, VD, Hold, and TD were measured on spectrograms, and articulation disorders and intonation patterns were also evaluated objectively on spectrograms. The results indicated that the nonfluent aphasic patients in the kMIT training group showed clinical improvements in speech intelligibility based on VOT and TD values, articulation evaluation, and prosodic pattern changes.

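
Of the pitch-analysis components listed above (PCM capture, AMDF, short-time autocorrelation, center clipping), the AMDF is the simplest to sketch: it dips toward zero at lags equal to the pitch period. A minimal Python version (the sampling rate, search range, and 100 Hz test tone are illustrative assumptions, not the kMIT implementation):

```python
import math

def amdf_pitch(x, fs, min_f0=60.0, max_f0=400.0):
    """Pitch estimate via the Average Magnitude Difference Function:
    the AMDF dips toward zero at lags equal to the pitch period, so
    the lag with the smallest mean |x[i] - x[i+lag]| is chosen."""
    min_lag = int(fs / max_f0)
    max_lag = int(fs / min_f0)
    n = len(x) - max_lag                      # samples compared per lag
    best_lag, best_val = min_lag, float("inf")
    for lag in range(min_lag, max_lag + 1):
        val = sum(abs(x[i] - x[i + lag]) for i in range(n)) / n
        if val < best_val:
            best_val, best_lag = val, lag
    return fs / best_lag

fs = 8000
x = [math.sin(2 * math.pi * 100 * n / fs) for n in range(1600)]
print(amdf_pitch(x, fs))  # 100.0
```

In practice, center clipping is applied first to flatten the formant structure, so that the AMDF or autocorrelation minimum reflects the true pitch period rather than a harmonic.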

Implementation of Korean Vowel 'ㅏ' Recognition based on Common Feature Extraction of Waveform Sequence (파형 시퀀스의 공통 특징 추출 기반 모음 'ㅏ' 인식 구현)

  • Roh, Wonbin;Lee, Jongwoo
    • KIISE Transactions on Computing Practices / v.20 no.11 / pp.567-572 / 2014
  • In recent years, computing and networking technologies have developed, communication equipment has become smaller, and mobility has increased; accordingly, the demand for easily operated speech recognition has grown. This paper proposes a method of recognizing the Korean phoneme 'ㅏ'. A phoneme is the smallest unit of sound and plays a significant role in speech recognition, but precise phoneme recognition faces many obstacles because pronunciation varies widely. The proposed method is simple and efficient: it is based on common features extracted from 'ㅏ' waveform sequences and is simpler than the previous, more complex methods. The experimental results indicate that the method recognizes 'ㅏ' with more than 90 percent accuracy.