• Title/Summary/Keyword: speech source

Search Results: 281

Real-Time DSP Implementation of Adaptive Multi-Rate with TMS320C542 board (TMS320C542보드를 이용한 Adaptive Multi-Rate 음성부호화기의 실시간 구현)

  • 박세익;전라온;이인성
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.827-830
    • /
    • 2000
  • 3GPP and ETSI adopted AMR (Adaptive Multi-Rate) as a standard for next-generation IMT-2000 service. In this paper, we analyze the AMR algorithm and optimize the ANSI C source using the Texas Instruments C compiler and assembly language. The implemented AMR speech codec requires 28.2 MIPS of complexity for the encoder and 5.5 MIPS for the decoder. We achieved a real-time implementation of the AMR speech codec using 82% of a TMS320C5402 with a 40 MIPS specification. We verify that the output speech of the speech codec implemented on the DSP board is identical to the result of the C source program simulation. The reconstructed speech is also verified in a real-time environment consisting of a microphone and a speaker.

  • PDF

Speech Enhancement for Voice commander in Car environment (차량환경에서 음성명령어기 사용을 위한 음성개선방법)

  • 백승권;한민수;남승현;이봉호;함영권
    • Journal of Broadcast Engineering
    • /
    • v.9 no.1
    • /
    • pp.9-16
    • /
    • 2004
  • In this paper, we present a speech enhancement method as a pre-processor for a voice commander in a car environment. For friendly and safe use of the voice commander in a running car, non-stationary audio signals such as music and non-candidate speech should be reduced. Our technique is based on two microphones and consists of two parts: Blind Source Separation (BSS) and Kalman filtering. First, BSS operates as a spatial filter to deal with non-stationary signals, and then car noise is reduced by Kalman filtering as a temporal filter. The algorithm's performance is tested on speech recognition, and the results show that our two-microphone technique can be a good candidate for a voice commander.
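The second, temporal stage of the pipeline above can be illustrated with a minimal scalar Kalman smoother. This is a generic sketch, not the paper's implementation; the process and measurement variances are illustrative tuning values.

```python
import numpy as np

def kalman_denoise(z, process_var=1e-4, meas_var=1e-1):
    """Scalar Kalman filter: smooth a 1-D noisy signal z.

    process_var and meas_var are illustrative assumptions,
    not parameters from the paper.
    """
    x_hat = 0.0  # state estimate
    p = 1.0      # estimate variance
    out = np.empty_like(z, dtype=float)
    for i, zi in enumerate(z):
        p += process_var                # predict (random-walk state model)
        k = p / (p + meas_var)          # Kalman gain
        x_hat += k * (zi - x_hat)       # update with measurement
        p *= (1.0 - k)
        out[i] = x_hat
    return out

# toy usage: a constant level buried in white noise
rng = np.random.default_rng(0)
noisy = 1.0 + 0.3 * rng.standard_normal(2000)
clean = kalman_denoise(noisy)
```

The small process variance makes the filter average over roughly 30 samples, trading tracking speed for noise suppression.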

The Interlanguage Speech Intelligibility Benefit for Korean Learners of English: Production of English Front Vowels

  • Han, Jeong-Im;Choi, Tae-Hwan;Lim, In-Jae;Lee, Joo-Kyeong
    • Phonetics and Speech Sciences
    • /
    • v.3 no.2
    • /
    • pp.53-61
    • /
    • 2011
  • The present work is a follow-up to Han, Choi, Lim and Lee (2011), which found an asymmetry in the source segments eliciting the interlanguage speech intelligibility benefit (ISIB): vowels that did not match any vowel of Korean were likely to elicit more ISIB than matched vowels. To identify the source of the stronger ISIB in non-matched vowels, acoustic analyses of the stimuli were performed. Two pairs of English front vowels, [i] vs. [ɪ] and [ɛ] vs. [æ], were recorded by native English talkers and by two groups of Korean learners divided by English proficiency, and vowel duration and the frequencies of the first two formants (F1, F2) were measured. The results demonstrated that the non-matched vowels [ɪ] and [æ] produced by Korean talkers showed acoustic characteristics deviating from those of the natives, with longer duration and with formant values closer to the matched vowels [i] and [ɛ] than those of the English natives. Combining the acoustic measurements of the present study with the word-identification results of Han et al. (2011), we suggest that the relatively better word-identification performance of Korean talkers/listeners compared with native English talkers/listeners is associated with the shared interlanguage of Korean talkers and listeners.

  • PDF

A Low-Cost Speech to Sign Language Converter

  • Le, Minh;Le, Thanh Minh;Bui, Vu Duc;Truong, Son Ngoc
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.3
    • /
    • pp.37-40
    • /
    • 2021
  • This paper presents a design of a speech-to-sign-language converter for deaf and hard-of-hearing people. The device is low-cost, has low power consumption, and can work entirely offline. Speech recognition is implemented using an open-source API, the Pocketsphinx library. In this work, we propose a context-oriented language model, which measures the similarity between the recognized speech and predefined speech to decide the output. The output speech is selected from the recommended speech stored in the database that best matches the recognized speech. The proposed context-oriented language model improves the speech recognition rate by 21% while working entirely offline. A decision module, which determines the similarity between the two texts using the Levenshtein distance, decides the output sign language. The output sign language corresponding to the recognized speech is generated as a set of sequential images. The speech-to-sign-language converter is deployed on a Raspberry Pi Zero board for low-cost deaf assistive devices.
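The Levenshtein-based decision module described above can be sketched as follows; the command set here is hypothetical, for illustration only.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between strings a and b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_match(recognized: str, commands: list[str]) -> str:
    """Pick the predefined command closest to the recognized text."""
    return min(commands, key=lambda c: levenshtein(recognized, c))

# hypothetical command set for illustration
commands = ["turn on the light", "turn off the light", "open the door"]
print(best_match("turn off teh light", commands))  # → "turn off the light"
```

Choosing the minimum-distance command makes the recognizer robust to small transcription errors, at the cost of occasionally mapping out-of-vocabulary speech to the nearest command.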

Study on optimal number of latent source in speech enhancement based Bayesian nonnegative matrix factorization (베이지안 비음수 행렬 인수분해 기반의 음성 강화 기법에서 최적의 latent source 개수에 대한 연구)

  • Lee, Hye In;Seo, Ji Hun;Lee, Young Han;Kim, Je Woo;Lee, Seok Pil
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2015.07a
    • /
    • pp.418-420
    • /
    • 2015
  • This paper describes the enhancement performance of a speech enhancement technique based on Bayesian nonnegative matrix factorization (BNMF) as a function of the number of latent sources for the speech and noise components. The BNMF-based speech enhancement technique decomposes the input signal into a sum of sub-signals and then removes the noise components; its performance is known to be superior to that of conventional NMF-based methods. However, it has the drawbacks of heavy computation and of performance that varies with the number of latent sources. To address this, we conducted experiments to find the optimal number of latent sources for BNMF-based speech enhancement. The experiments varied the noise type, the speech type, the number of latent sources for speech and noise, and the SNR, and used PESQ (perceptual evaluation of speech quality) as the performance measure. The results showed that the number of speech latent sources does not affect performance, whereas performance improves as the number of noise latent sources increases.

  • PDF
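The role of the latent source count can be illustrated with plain (non-Bayesian) NMF from scikit-learn on a synthetic magnitude "spectrogram"; this is a simplified stand-in for the paper's BNMF, and the data are synthesized, not the paper's.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy magnitude "spectrogram": 129 frequency bins x 100 frames,
# built from 4 underlying nonnegative sources plus small noise.
rng = np.random.default_rng(1)
bases = rng.random((129, 4))
acts = rng.random((4, 100))
V = bases @ acts + 0.01 * rng.random((129, 100))

# Reconstruction error as the number of latent sources k varies.
errs = {}
for k in (2, 4, 8):
    model = NMF(n_components=k, init="random", random_state=0, max_iter=500)
    W = model.fit_transform(V)   # spectral bases
    H = model.components_        # temporal activations
    errs[k] = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
    print(f"k={k}: relative error {errs[k]:.4f}")
```

With too few components the factorization cannot represent all sources, which is the trade-off the paper's experiments quantify with PESQ instead of raw reconstruction error.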

A study on sound source segregation of frequency domain binaural model with reflection (반사음이 존재하는 양귀 모델의 음원분리에 관한 연구)

  • Lee, Chai-Bong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.15 no.3
    • /
    • pp.91-96
    • /
    • 2014
  • Among sound source direction estimation and separation methods, the Frequency Domain Binaural Model (FDBM) offers low computational cost and high sound source separation performance. This method performs sound source orientation and separation by obtaining the Interaural Phase Difference (IPD) and Interaural Level Difference (ILD) in the frequency domain, but reflections pose a problem in practical environments. To reduce the effect of reflections, a method is presented that simulates the sound localization of the direct sound, detects the initially arriving sound, checks the direction of the sound, and separates the sound. Simulation results show that the estimated direction lies within 10% of the sound source direction and that, in the presence of reflections, sound source separation is improved over the existing FDBM, with higher coherence and PESQ (Perceptual Evaluation of Speech Quality) scores and lower directional damping. Without reflections, the degree of separation was low.
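The two binaural cues named above can be computed per frequency bin from one pair of STFT frames. This is a minimal sketch of the cues only; the windowing parameters are assumptions, and the direction-mapping and separation stages of the full FDBM are omitted.

```python
import numpy as np

def ipd_ild(left_frame, right_frame):
    """Per-bin interaural phase and level differences of one frame pair."""
    L = np.fft.rfft(left_frame * np.hanning(len(left_frame)))
    R = np.fft.rfft(right_frame * np.hanning(len(right_frame)))
    ipd = np.angle(L * np.conj(R))  # phase difference per bin, radians
    ild = 20.0 * np.log10((np.abs(L) + 1e-12) / (np.abs(R) + 1e-12))  # dB
    return ipd, ild

# toy usage: a 1 kHz tone arriving 4 samples later at the right ear
fs, n, delay = 16000, 512, 4
t = np.arange(n + delay) / fs
sig = np.sin(2 * np.pi * 1000 * t)
ipd, ild = ipd_ild(sig[delay:delay + n], sig[:n])
# at 16 kHz, a 4-sample lead corresponds to an IPD of pi/2 at 1 kHz (bin 32)
```

In the full model each time-frequency bin is assigned to a direction from these cues, which is what lets FDBM separate sources as a spatial mask.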

Spatial Speaker Localization for a Humanoid Robot Using TDOA-based Feature Matrix (도착시간지연 특성행렬을 이용한 휴머노이드 로봇의 공간 화자 위치측정)

  • Kim, Jin-Sung;Kim, Ui-Hyun;Kim, Do-Ik;You, Bum-Jae
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.3
    • /
    • pp.237-244
    • /
    • 2008
  • Nowadays, research on human-robot interaction has been receiving increasing attention, and speech signal processing in particular is the source of much interest. In this paper, we report a speaker localization system with six microphones for a humanoid robot called MAHRU from KIST and propose a time-delay-of-arrival (TDOA)-based feature matrix, with an algorithm based on the minimum sum of absolute errors (MSAE), for sound source localization. The TDOA-based feature matrix is defined as a simple database matrix calculated from pairs of microphones installed on the humanoid robot. The proposed method, using the TDOA-based feature matrix and its MSAE-based algorithm, effortlessly localizes a sound source without any need to solve approximate nonlinear equations. To verify the solid performance of our speaker localization system for a humanoid robot, we present experimental results for speech sources in all directions within a 5 m distance, with the height divided into three levels.

  • PDF
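The lookup idea behind the feature matrix can be sketched as follows: precompute pairwise TDOAs for a grid of candidate directions, then at runtime pick the row minimizing the sum of absolute errors against the measured TDOAs. The microphone geometry here is hypothetical (four mics on a square), not MAHRU's six-microphone layout.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

# Hypothetical geometry for illustration: 4 mics on a 0.2 m square
mics = np.array([[0.1, 0.1, 0.0], [-0.1, 0.1, 0.0],
                 [-0.1, -0.1, 0.0], [0.1, -0.1, 0.0]])
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def tdoa_row(src):
    """Pairwise TDOAs (seconds) for a point source at position src."""
    d = np.linalg.norm(mics - src, axis=1)
    return np.array([(d[i] - d[j]) / C for i, j in pairs])

# Feature matrix: one row of pairwise TDOAs per candidate azimuth (2 m range)
azimuths = np.arange(0, 360, 5)
feature_matrix = np.array([
    tdoa_row(np.array([2 * np.cos(np.radians(a)), 2 * np.sin(np.radians(a)), 0.0]))
    for a in azimuths])

def localize(measured_tdoas):
    """Pick the candidate direction with the minimum sum of absolute errors."""
    errors = np.abs(feature_matrix - measured_tdoas).sum(axis=1)
    return azimuths[np.argmin(errors)]

# toy check: a source 2 m away at 90 degrees
print(localize(tdoa_row(np.array([0.0, 2.0, 0.0]))))  # → 90
```

Because localization reduces to a table lookup, no nonlinear hyperbolic intersection equations need to be solved online, which matches the motivation stated in the abstract.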

Measurement and evaluation of speech privacy in university office rooms (대학 내 사무실의 스피치 프라이버시 측정 및 평가)

  • Lim, Jae-Seop;Choi, Young-Ji
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.4
    • /
    • pp.396-405
    • /
    • 2019
  • The speech privacy of closed office rooms located on a university campus was measured and assessed in terms of SPC (Speech Privacy Class) values. Measurements of two quantities, the LD (Level Difference) between a source and a receiving room and the background noise level (L_b) in the receiving room, were carried out in 5 rooms located in 3 different buildings on the campus. Each of the 5 rooms was adjacent to both offices and corridors through walls. The TL (Transmission Loss) between the source and the receiving room was also measured to compare the difference between two standard methods, ASTM E2638-10 and KS F 2809. The present results show that the speech privacy of the 5 office rooms does not meet the requirement of a minimum SPC value of 70. A minimum LD value of 41 dB between the source and the receiving room should be achieved to obtain an SPC value of 70 when the mean measured L_b in the receiving room is 29.2 dB. That is, the TL(avg) value averaged over the octave bands from 160 Hz to 5000 Hz between the source and the receiving room should be 40 dB or greater. The most important architectural factor influencing the LD value is the presence of openings, such as doors and windows, in the adjacent walls between the source and receiving rooms. Therefore, if the opening in the adjacent wall is replaced by one with high sound insulation, an appropriate SPC value for the research and office rooms can be achieved.
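As the abstract uses the quantities, the SPC figure is the sum of the room-to-room level difference and the receiving-room background noise level; this reading (an assumption based on the numbers quoted, not a restatement of the standard) reproduces the abstract's threshold arithmetic.

```python
def speech_privacy_class(level_difference_db, background_noise_db):
    """SPC as implied by the abstract: level difference plus background level."""
    return level_difference_db + background_noise_db

# the abstract's figures: minimum LD = 41 dB, mean measured L_b = 29.2 dB
spc = speech_privacy_class(41.0, 29.2)
print(spc)  # → 70.2, just above the minimum requirement of 70
```

This also shows why a noisier receiving room relaxes the required level difference: the same SPC can be reached with less wall insulation when more background noise masks intruding speech.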

A New Speech Enhancement Method Using Adaptive Digital Filter (적응디지털필터를 사용한 음질향상 방법)

  • 임용훈;김완구;차일환;윤대희
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.10
    • /
    • pp.35-41
    • /
    • 1993
  • In this paper, a new speech enhancement method for speech signals corrupted by environmental noise is proposed. Two signals are obtained, one from a microphone and one from an accelerometer attached to the neck. Since the two signals are generated from the same source signal, they are closely correlated, and environmental noise has no effect on the accelerometer signal. The speech enhancement system identifies the optimal linear system between the two signals on the basis of their dependence. The enhanced speech is obtained by filtering the noise-free accelerometer signal. Since the characteristics of the speech signal and the environmental noise change with time, an adaptive filtering system must be used to characterize the time-varying system. Simulation results show a 7 dB enhancement for a 0 dB speech signal level relative to white noise.

  • PDF

Real-Time Implementation of AMR Speech Codec Using TMS320VC5510 DSP (TMS320VC5510 DSP를 이용한 AMR 음성부호화기의 실시간 구현)

  • Kim, Jun;Bae, Keun-Sung
    • MALSORI
    • /
    • no.65
    • /
    • pp.143-152
    • /
    • 2008
  • This paper focuses on the real-time implementation of the adaptive multi-rate (AMR) speech codec, a standard speech codec of IMT-2000, using the TMS320VC5510. The TMS320VC55x series is a family of 16-bit fixed-point digital signal processors (DSPs) from Texas Instruments (TI) with low power consumption for mobile communications. After analyzing the AMR algorithm and source code as well as the structure and I/O of the TMS320VC55x, we optimize the programs for real-time implementation. The implemented AMR speech codec uses 55.2 kbytes of program memory and 98.3 kbytes of data memory, and it requires 709,878 clocks, i.e. about 3.5 ms, to process one 20 ms speech frame.

  • PDF
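The cycle count quoted in the abstract can be turned into a real-time budget. The 200 MHz clock rate is an assumption (a common TMS320VC5510 operating point consistent with the abstract's 3.5 ms figure), not a value stated in the abstract.

```python
clocks_per_frame = 709_878   # from the abstract
clock_rate_hz = 200e6        # assumed VC5510 clock rate
frame_ms = 20.0              # AMR speech frame length

time_ms = clocks_per_frame / clock_rate_hz * 1e3
load = time_ms / frame_ms
print(f"{time_ms:.2f} ms per frame, {load:.1%} of real time")
```

Processing each 20 ms frame in about 3.5 ms leaves roughly 82% of the processor free, so the codec comfortably meets real-time constraints on this DSP.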