• Title/Summary/Keyword: Simulation speech


A DCT Adaptive Subband Filter Algorithm Using Wavelet Transform (웨이브렛 변환을 이용한 DCT 적응 서브 밴드 필터 알고리즘)

  • Kim, Seon-Woong;Kim, Sung-Hwan
    • The Journal of the Acoustical Society of Korea / v.15 no.1 / pp.46-53 / 1996
  • The adaptive LMS algorithm has been used in many application areas because of its low complexity. In this paper, the input signal is decomposed into subbands of arbitrary bandwidth. The dynamic range is reduced in each subband, so independent filtering within each subband converges faster than a full-band system. DCT transform-domain LMS adaptive filtering additionally whitens the input signal in each band, which accelerates convergence considerably by reducing the eigenvalue spread. Finally, the filtered subband signals are synthesized so that the output signal contains the full range of frequency components. In this procedure a wavelet filter bank guarantees perfect reconstruction of the signal without inter-spectral interference. In simulations on speech corrupted by additive white Gaussian noise, the proposed algorithm outperforms the conventional NLMS algorithm at high SNR.
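As background, the NLMS baseline the abstract compares against can be sketched as follows; the filter length, step size, and toy system-identification signals are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def nlms(x, d, num_taps=32, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt w so that the filtered x tracks the desired d."""
    w = np.zeros(num_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]       # xn[k] = x[n-k]
        y[n] = w @ xn                              # filter output
        e[n] = d[n] - y[n]                         # error signal
        w += (mu / (eps + xn @ xn)) * e[n] * xn    # normalized update
    return y, e, w

# Toy system identification: white input through an unknown FIR channel h.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(x, h)[:len(x)]
_, e, w = nlms(x, d)   # w[:4] approaches h as the filter converges
```

The same update applied independently in each DCT subband, as the paper proposes, converges faster because each subband sees a flatter (whitened) spectrum.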


Optimum Pattern Synthesis for a Microphone Array (마이크로폰 어레이를 위한 최적 패턴 형성)

  • Chang, Byoung-Kun;Kwon, Tae-Neung;Byun, Youn-Shik
    • The Journal of the Acoustical Society of Korea / v.16 no.1 / pp.47-53 / 1997
  • This paper presents an efficient approach to forming the beam pattern of a microphone array that must handle broadband signals, such as speech in a teleconference. A numerical method is proposed to find the updated sidelobe locations for equalizing the sidelobes via perturbation of array parameters such as the array weights or the microphone spacing. The microphone array is thus optimized in a Dolph-Chebyshev sense, so that directional or background noise incident within the array's visible range is suppressed efficiently. It is shown that perturbing the microphone spacing yields an optimum pattern better suited to broadband signals than perturbing the array weights. A novel method is also proposed to find a beam pattern that remains robust with respect to sidelobe levels while scanning. Computer simulation results are presented.
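The Dolph-Chebyshev design mentioned above equalizes all sidelobes at a chosen attenuation. A minimal sketch for a uniform line array, with the element count and attenuation chosen only for illustration:

```python
import numpy as np
from scipy.signal.windows import chebwin

M = 16            # number of microphones (assumed)
atten_db = 30     # sidelobe attenuation in dB (assumed)
w = chebwin(M, at=atten_db)
w /= w.sum()      # unity gain at broadside

# Array factor over direction cosine u = cos(theta), half-wavelength spacing.
u = np.linspace(-1, 1, 2001)
n = np.arange(M)
af = np.abs(np.exp(1j * np.pi * np.outer(u, n)) @ w)
af_db = 20 * np.log10(af / af.max() + 1e-12)
```

Every sidelobe of `af_db` sits at roughly -30 dB, the equal-ripple property that motivates the Dolph-Chebyshev criterion; the paper's contribution is achieving a comparable pattern by perturbing microphone spacing instead of the weights.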


Acoustic Echo Cancellation Based on Convolutive Blind Signal Separation Method (Convolutive 암묵신호분리방법에 기반한 음향반향 제거)

  • Lee, Haeng-Woo
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.5 / pp.979-986 / 2018
  • This paper deals with acoustic echo cancellation based on a blind signal separation method. Unlike conventional approaches, this method does not degrade the echo cancellation performance during double-talk. In a closed echo environment, the mixing model of the acoustic signals is multi-channel, so a convolutive blind signal separation method is applied: the mixing coefficients are estimated using a feedback model, without directly computing the separation coefficients. The coefficient update is performed by iterative calculations based on second-order statistical properties, and the near-end speech is thereby estimated. A number of simulations were performed to verify the performance of the proposed method. The results show that an acoustic echo canceller using this method operates reliably regardless of the presence of double-talk, and that PESQ improves by 0.6 points compared with a conventional adaptive FIR filter structure.
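As a rough illustration of the feedback-model idea (not the paper's exact update rule), the echo path can be estimated block-wise by driving the cross-correlation between the residual (the estimated near-end signal) and the far-end reference toward zero; all signals, the filter length, and the step size below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 20000, 8
far = rng.standard_normal(N)                 # far-end (loudspeaker) signal
near = 0.3 * rng.standard_normal(N)          # stand-in for near-end speech
h = np.array([0.5, 0.3, -0.2, 0.1, 0.05, -0.03, 0.02, 0.01])  # echo path
mic = np.convolve(far, h)[:N] + near         # microphone: echo + near-end

# Feedback model: subtract the estimated echo, then update the echo-path
# estimate from the second-order statistics (cross-correlation) of the block.
w = np.zeros(L)
mu, B = 0.2, 200
for start in range(L, N - B, B):
    idx = np.arange(start, start + B)
    X = np.stack([far[idx - k] for k in range(L)], axis=1)  # far-end history
    resid = mic[idx] - X @ w                 # estimated near-end block
    grad = X.T @ resid / B                   # residual/far-end cross-correlation
    w += mu * grad                           # decorrelation update
```

Because the update depends only on correlations with the far-end reference, the near-end speech acts as uncorrelated noise rather than a disturbance that stalls adaptation, which is the intuition behind robustness during double-talk.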

Signal Enhancement of a Variable Rate Vocoder with a Hybrid domain SNR Estimator

  • Park, Hyung Woo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.962-977 / 2019
  • The human voice is a convenient means of transferring information between people, between people and machines, and between machines. With the development of information and communication technology, the voice can now be transmitted farther than ever before. To communicate, the voice is converted into another form, transmitted, and then converted back into sound. In such a communication process, a vocoder performs this conversion and reconversion of voice and sound. The CELP (Code-Excited Linear Prediction) vocoder, one of the voice codecs, has been adopted as a standard because it provides high-quality sound even at relatively low transmission rates. The EVRC (Enhanced Variable Rate CODEC) and QCELP (Qualcomm Code-Excited Linear Prediction), both variable bit rate vocoders, are used for mobile phones in 3G environments. In real-time vocoder implementations, reduced sound quality is a typical problem, and to improve it, knowing the size and shape of the noise is important. Existing enhancement methods either detect and exploit voice activity or apply statistical methods to large amounts of data; their disadvantage is that noise cannot be detected when the signal is continuous or when the noise changes rapidly. This paper focuses on reducing the loss of sound quality in low bit rate transmission environments. Based on simulation results, this study proposes a preprocessor that estimates the SNR (Signal-to-Noise Ratio) with a spectral SNR estimation method; the estimator adopts the IMBE (Improved Multi-Band Excitation) approach rather than computing the SNR directly over the continuous speech signal. The resulting application improves the quality of the vocoder by enhancing the sound adaptively.
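A toy version of per-band spectral SNR estimation can clarify the idea: track the minimum band energy over the utterance as a noise-floor estimate, then express each band's energy relative to it. The frame sizes, band count, and minimum-statistics noise floor below are illustrative assumptions, not the paper's IMBE-based estimator.

```python
import numpy as np

def band_snr_estimate(x, fs=8000, frame=256, hop=128, n_bands=8):
    """Rough per-band SNR: noise floor = minimum band energy over all frames."""
    win = np.hanning(frame)
    frames = [x[i:i + frame] * win for i in range(0, len(x) - frame, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2       # per-frame power spectra
    bands = np.array_split(spec.T, n_bands)               # group FFT bins into bands
    band_pow = np.stack([b.sum(axis=0) for b in bands])   # (n_bands, n_frames)
    noise = band_pow.min(axis=1, keepdims=True)           # noise floor per band
    return 10 * np.log10(band_pow / (noise + 1e-12) + 1e-12)

# Usage: a gated 1 kHz tone in weak noise; the band containing the tone
# should show the highest average SNR.
fs = 8000
t = np.arange(2 * fs) / fs
gate = (t % 0.5) < 0.25
x = gate * np.sin(2 * np.pi * 1000 * t) \
    + 0.05 * np.random.default_rng(4).standard_normal(len(t))
snr_db = band_snr_estimate(x, fs=fs)
```

A preprocessor can then apply band-wise attenuation where the estimated SNR is low, which is the "adaptive enhancement" role the abstract describes.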

Evaluation of a signal segregation by FDBM (FDBM의 음원분리 성능평가)

  • Lee, Chai-Bong
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.12 / pp.1793-1802 / 2013
  • Various approaches to sound source segregation have been proposed. Among them, the frequency domain binaural model (FDBM) offers low computational load and effective howling cancellation, and a binaural hearing assistance system based on FDBM has been proposed that can enhance a desired signal using directivity information. Although FDBM has been evaluated in terms of signal-to-noise ratio (SNR) and the coherence function, the evaluation results do not always agree with human impressions: these are physical measures that do not take the perceptual aspects of human hearing into account. Considering a binaural hearing assistance system as one of the major applications, the segregated sound must maintain sufficient quality. In this paper, the signal segregation performance of FDBM is evaluated with three objective methods, i.e., SNR, coherence, and Perceptual Evaluation of Speech Quality (PESQ), to discuss the characteristics of FDBM's sound source segregation. The simulation results show that FDBM improves the quality of the left and right channel signals to an equivalent level, and they suggest that PESQ may provide a more useful measure than SNR and coherence for assessing the segregation performance of FDBM. The PESQ results also show the effects of the segregation parameters and indicate appropriate parameter settings under the tested conditions.
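Two of the three objective measures above are easy to compute directly; the sketch below evaluates SNR and magnitude-squared coherence on a toy clean/processed pair (the signals and parameters are assumptions). PESQ, by contrast, requires the ITU-T P.862 reference algorithm and is omitted here.

```python
import numpy as np
from scipy.signal import coherence

fs = 16000
rng = np.random.default_rng(2)
clean = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s reference tone
noisy = clean + 0.1 * rng.standard_normal(fs)          # "processed" signal

# SNR: ratio of reference power to residual (error) power.
snr_db = 10 * np.log10(np.sum(clean ** 2) / np.sum((noisy - clean) ** 2))

# Magnitude-squared coherence between reference and processed signals.
f, Cxy = coherence(clean, noisy, fs=fs, nperseg=1024)
```

At the tone frequency the coherence approaches 1, while the scalar SNR summarizes the whole signal; the paper's point is that neither tracks perceived quality as well as PESQ does.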

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect the dependency between objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, i.e., a vowel or a consonant, is the smallest unit that composes Korean text. We construct language models with three or four LSTM layers. Each model was trained with the stochastic gradient algorithm and with more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was carried out on Old Testament texts using the deep learning package Keras with the Theano backend.
After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output vector pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly better, and were even worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer LSTM model tended to generate sentences closer to natural language than the 3-layer model. Although the completeness of the generated sentences differed slightly between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely applied to Korean language processing and speech recognition, which form the basis of artificial intelligence systems.
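The two preprocessing steps the study relies on, decomposing Hangul syllables into phoneme-level jamo and building 20-in / 21st-out sliding windows, can be sketched as below; the corpus string is a placeholder, not the study's Old Testament dataset, and the LSTM training itself is omitted.

```python
import numpy as np

# Hangul syllables decompose arithmetically into jamo (initial consonant,
# vowel, optional final consonant) via the Unicode syllable block layout.
BASE, CHO, JUNG = 0xAC00, 588, 28

def to_jamo(text):
    out = []
    for ch in text:
        code = ord(ch) - BASE
        if 0 <= code < 11172:                               # precomposed syllable
            out.append(chr(0x1100 + code // CHO))           # initial consonant
            out.append(chr(0x1161 + (code % CHO) // JUNG))  # vowel
            if code % JUNG:
                out.append(chr(0x11A7 + code % JUNG))       # final consonant
        else:
            out.append(ch)                                  # spaces, punctuation
    return out

# Sliding-window dataset: 20 phonemes in, the 21st as the prediction target.
corpus = "태초에 하나님이 천지를 창조하시니라 " * 50          # placeholder corpus
jamo = to_jamo(corpus)
vocab = {c: i for i, c in enumerate(sorted(set(jamo)))}

seq_len = 20
X = np.array([[vocab[c] for c in jamo[i:i + seq_len]]
              for i in range(len(jamo) - seq_len)])
y = np.array([vocab[jamo[i + seq_len]] for i in range(len(jamo) - seq_len)])

# 70:15:15 train/validation/test split, as in the study.
n_train, n_val = int(0.7 * len(X)), int(0.15 * len(X))
X_train, X_val, X_test = np.split(X, [n_train, n_train + n_val])
```

Working at the jamo level keeps the vocabulary tiny (a few dozen symbols), which is exactly the model-complexity argument the abstract makes against word- and morpheme-level models.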

A Study on Transcranial Magnetic Electrode Simulation Using Maxwell 3D (Maxwell 3D를 이용한 경두개 자기 전극 시뮬레이션에 관한 연구)

  • Lee, Geun-Yong;Yoon, Se-Jin;Jeong, Jin-hyoung;Kim, Jun-Tae;Lee, Sang-sik
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.657-665 / 2019
  • In this study, we investigated the transcranial magnetic electrode, a method used in the study of dementia, a neurodegenerative disease of the aging society that is becoming a worldwide problem, and of muscle pain. Transcranial magnetic electrodes have been studied as a way to improve abilities degraded by dementia symptoms, such as speech, cognition, and memory, by delivering a magnetic field deep into the brain using coils placed on the scalp. In this study, the coil, the core component of the transcranial magnetic electrode, was designed through simulation with the Maxwell 3D program. Comparing the coil designed in previous research with the coil developed in this work, the new coil showed superior output. The B-field and H-field outputs of each coil are symmetrical, but the symmetry between the two coils is only approximate. Based on these results, an experiment was conducted to confirm whether output through the scalp is possible with both coils. In the magnitude-field analysis of the reverse-wound 2-coil configuration, the maximum output was 3.3920e+004 A/m, and the vector field showed the strongest magnetic field at around 35 to 165 degrees; the magnetic outputs were confirmed to cancel each other. In the forward-wound 2-coil case, a similar maximum of 3.2348e+004 A/m was observed, but the vector field confirmed the magnetic output directed toward the scalp. However, as the height of the output coil increased, the magnetic output decreased.
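The forward/reverse two-coil behavior reported above can be illustrated with the on-axis Biot-Savart field of two coaxial loops, which add when wound in the same sense and cancel at the midpoint when wound oppositely. The geometry and current below are arbitrary assumptions for the sketch, not the paper's coil design.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability

def loop_B_axis(z, radius, current):
    """On-axis magnetic flux density of a circular current loop (Biot-Savart)."""
    return MU0 * current * radius**2 / (2 * (radius**2 + z**2) ** 1.5)

# Two coaxial coils a distance d apart, evaluated along the common axis.
radius, current, d = 0.02, 1000.0, 0.03   # 2 cm radius, 3 cm apart (assumed)
z = np.linspace(-0.05, 0.05, 501)
b1 = loop_B_axis(z + d / 2, radius, current)
b2 = loop_B_axis(z - d / 2, radius, current)

forward = b1 + b2    # same winding sense: fields reinforce
reverse = b1 - b2    # opposite sense: fields cancel at the midpoint
```

A full FEM solution (as in Maxwell 3D) is needed for off-axis fields and realistic coil geometry, but this superposition argument already explains the cancellation seen in the reverse-wound configuration.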

Modeling of Sensorineural Hearing Loss for the Evaluation of Digital Hearing Aid Algorithms (디지털 보청기 알고리즘 평가를 위한 감음신경성 난청의 모델링)

  • Kim, Dong-Wook;Park, Young-Cheol
    • Journal of Biomedical Engineering Research / v.19 no.1 / pp.59-68 / 1998
  • Digital hearing aids offer many advantages over conventional analog hearing aids, and with the advent of high-speed digital signal processing chips, new digital techniques have been introduced to them. However, evaluating new ideas in hearing aids normally requires intensive subject-based clinical tests, which take much time and cost. In this paper, we present an objective method to evaluate and predict the performance of hearing aid systems without such subject-based tests. In the hearing impairment simulation (HIS) algorithm, a sensorineural hearing impairment model is established from auditory test data of the impaired subject being simulated. The nonlinear behavior of loudness recruitment is defined using hearing loss functions generated from the measurements. To transform natural input sound into its impaired counterpart, a frequency sampling filter is designed; the filter is continuously refreshed with the level-dependent frequency response provided by the impairment model. To assess performance, the HIS algorithm was implemented in real time on a floating-point DSP. Signals processed with the real-time system were presented to normal-hearing subjects, and their auditory data as modified by the system were measured. The simulated sensorineural hearing impairment was then tested: hearing threshold and speech discrimination tests demonstrated the efficiency of the system for hearing impairment simulation. Using the HIS system, we evaluated three typical hearing aid algorithms.
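A minimal sketch of the loudness recruitment idea (the mapping below is an illustrative expansion rule, not the paper's measured hearing loss functions): sounds below the impaired threshold are inaudible, and above it perceived loudness grows faster than normal until it catches up with normal hearing at a high level.

```python
import numpy as np

def recruitment_map(level_db, threshold_db, catchup_db=100.0):
    """Map a normal-hearing presentation level to the impaired ear's
    effective level. Below threshold_db nothing is heard; above it,
    loudness grows expansively and reaches normal at catchup_db."""
    level_db = np.asarray(level_db, dtype=float)
    out = np.zeros_like(level_db)
    audible = level_db > threshold_db
    slope = catchup_db / (catchup_db - threshold_db)   # > 1: recruitment
    out[audible] = slope * (level_db[audible] - threshold_db)
    return out

# Example: a 40 dB threshold shift. 30 and 40 dB inputs are inaudible,
# while a 70 dB input is mapped to 50 dB of effective loudness.
mapped = recruitment_map([30.0, 40.0, 70.0, 100.0], threshold_db=40.0)
```

In a per-band HIS implementation this level-dependent mapping would drive the gain of each band of the frequency sampling filter, which is why the filter must be refreshed continuously as the input level changes.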


Performance Analysis of a Statistical Packet Voice/Data Multiplexer (통계적 패킷 음성 / 데이터 다중화기의 성능 해석)

  • Shin, Byung-Cheol;Un, Chong-Kwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.11 no.3 / pp.179-196 / 1986
  • In this paper, the performance of a statistical packet voice/data multiplexer is studied. We assume that the multiplexer uses two separate finite queues for voice and data traffic, and that voice traffic has priority over data. For the performance analysis, the output link of the multiplexer is divided into a sequence of time slots. The voice signal is modeled as an (M+1)-state Markov process, M being the packet generation period in slots; the data traffic is modeled as a simple Poisson process. In this discrete-time analysis, the queueing behavior of the voice traffic is hardly affected by the data traffic, since voice has priority. Therefore, we first analyze the queueing behavior of the voice traffic, and then use the result to study the queueing behavior of the data traffic. For the packet voice multiplexer, the input state and voice buffer occupancy are jointly formulated as a two-dimensional Markov chain. For the integrated voice/data multiplexer, a three-dimensional Markov chain represents the input voice state and the buffer occupancies of voice and data. With these models, numerical performance results were obtained by the Gauss-Seidel iteration method, and the analytical results were verified by computer simulation. The results show that tradeoffs exist among the number of voice users, the output link capacity, the voice queue size, and the overflow probability for the voice traffic, and likewise among the traffic load, data queue size, and overflow probability for the data traffic. There is also a tradeoff between the performance of the voice and data traffic for given input traffic and link capacity. In addition, the average queueing delay of the data traffic was found to exceed the maximum buffer size when the gain of time-assignment speech interpolation (TASI) is more than two and the number of voice users is small.
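The slotted priority discipline described above is easy to simulate directly, which is how the paper verifies its Markov-chain analysis. The sketch below uses Bernoulli voice arrivals and Poisson data arrivals as stand-ins for the paper's (M+1)-state voice model; queue sizes and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
VOICE_Q, DATA_Q = 10, 10        # finite queue capacities (assumed)
p_voice, lam_data = 0.45, 0.4   # Bernoulli voice / Poisson data arrivals per slot
slots = 200_000

vq = dq = 0
v_drop = d_drop = v_arr = d_arr = 0
for _ in range(slots):
    # Arrivals: at most one voice packet, a Poisson batch of data packets.
    if rng.random() < p_voice:
        v_arr += 1
        if vq < VOICE_Q:
            vq += 1
        else:
            v_drop += 1
    a = rng.poisson(lam_data)
    d_arr += a
    accepted = min(a, DATA_Q - dq)
    dq += accepted
    d_drop += a - accepted
    # Service: one packet per slot, voice first (priority).
    if vq:
        vq -= 1
    elif dq:
        dq -= 1

print(f"voice overflow {v_drop / max(v_arr, 1):.4f}, "
      f"data overflow {d_drop / max(d_arr, 1):.4f}")
```

With these rates the voice queue never overflows (at most one voice arrival and one service per slot), while the data traffic, which only gets the slots voice leaves free, sees a nonzero overflow probability, illustrating the voice/data tradeoff the paper quantifies.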
