• Title/Summary/Keyword: 음성기반 (speech-based)

Search Results: 2,243

Effect of Energy Normalization on the Quality of Synthetic Speech (음성합성시 에너지 정규화가 음질에 미치는 영향)

  • 정은석;최의선;이철희
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06a / pp.95-98 / 1998
  • This paper studies the effect of energy normalization of individual speech segments on the quality of synthesized speech in corpus-based speech synthesis. Because the speech segments used for synthesis are extracted from real natural speech data, they vary in loudness, so the synthetic speech produced by concatenating them generally has uneven volume and sounds unnatural. To solve this problem, we propose normalizing the energy of the speech segments during synthesis, using maximum-amplitude normalization as the normalization method. Using one corpus recorded in a relatively consistent environment and another recorded in a less consistent one, we compare the quality of speech synthesized with and without normalization. Experimental results show that normalizing the energy of the speech segments improves the quality of the synthetic speech. (A minimal normalization sketch follows this entry.)

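The paper itself provides no code; the sketch below only illustrates maximum-amplitude normalization of concatenation units, assuming mono floating-point waveforms and a hypothetical target_peak parameter.

```python
import numpy as np

def normalize_segment(segment: np.ndarray, target_peak: float = 0.9) -> np.ndarray:
    """Scale one unit so its maximum absolute amplitude equals target_peak."""
    peak = np.max(np.abs(segment))
    if peak == 0.0:
        return segment  # silent unit, nothing to scale
    return segment * (target_peak / peak)

def concatenate_units(segments: list[np.ndarray]) -> np.ndarray:
    """Peak-normalize every unit before concatenation so loudness stays even."""
    return np.concatenate([normalize_segment(s) for s in segments])
```

An RMS-based variant would divide by the root-mean-square value instead of the peak; the abstract states that the maximum-amplitude form was the one used.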

A study on Gabor Filter Bank-based Feature Extraction Algorithm for Analysis of Acoustic data of Emergency Rescue (응급구조 음향데이터 분석을 위한 Gabor 필터뱅크 기반의 특징추출 알고리즘에 대한 연구)

  • Hwang, Inyoung;Chang, Joon-Hyuk
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.1345-1347 / 2015
  • In this paper, to estimate a caller's surroundings from the ambient acoustic signal delivered to the dispatcher when an emergency is reported, we introduce a Gabor filter bank-based feature extraction technique, which models the spectral and temporal modulation characteristics of sound well, together with a deep neural network with strong classification performance. The proposed Gabor filter bank-based feature extraction first separates speech from non-speech with a non-speech interval detector, extracts 23-dimensional Mel-filter bank coefficients from the non-speech intervals, applies Gabor filters to these coefficients to obtain feature vectors for estimating the surroundings, and finally estimates the caller's location information with a trained deep neural network. The proposed method was evaluated under several scenario environments and showed good classification performance. (A sketch of the Gabor filtering step follows this entry.)
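
As a rough illustration of the Gabor filtering step only: a 2-D Gabor kernel can be convolved with a log-Mel spectrogram and pooled into features. The kernel size, wavelength, angles, and pooling below are assumptions, not the paper's settings, and the DNN stage is omitted.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size: int = 11, wavelength: float = 4.0,
                 theta: float = 0.0, sigma: float = 2.5) -> np.ndarray:
    """2-D Gabor kernel over the (time, Mel-channel) plane."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated modulation axis
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_features(log_mel: np.ndarray,
                   thetas=(0.0, np.pi / 4, np.pi / 2)) -> np.ndarray:
    """Convolve a (frames x 23) log-Mel spectrogram with a small Gabor bank
    and pool each response map to one mean value per filter."""
    responses = [convolve2d(log_mel, gabor_kernel(theta=t), mode="same") for t in thetas]
    return np.array([r.mean() for r in responses])
```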

Voice Activity Detection Based on Entropy in Noisy Car Environment (차량 잡음 환경에서 엔트로피 기반의 음성 구간 검출)

  • Roh, Yong-Wan;Lee, Kue-Bum;Lee, Woo-Seok;Hong, Kwang-Seok
    • Journal of the Institute of Convergence Signal Processing / v.9 no.2 / pp.121-128 / 2008
  • Accurate voice activity detection has a great impact on the performance of speech applications including speech recognition, speech coding, and speech communication. In this paper, we propose voice activity detection methods that can adapt to the various car noise situations encountered while driving. Existing voice activity detection relies on measures such as time energy, frequency energy, zero crossing rate, and spectral entropy, all of which suffer a rapid performance decline in noisy environments. Building on the existing spectral entropy approach to VAD, we propose voice activity detection methods using MFB (Mel-frequency filter bank) spectral entropy, gradient FFT (Fast Fourier Transform) spectral entropy, and gradient MFB spectral entropy. The MFB is the FFT spectrum weighted by the Mel scale, a nonlinear scale that reflects the characteristics of human auditory perception of speech. The proposed MFB spectral entropy method clearly improves the ability to discriminate between speech and non-speech in various noisy car environments, achieving 93.21% accuracy in our experiments. Compared to the spectral entropy method, the proposed voice activity detection gives an average improvement in the correct detection rate of more than 3.2%. (A minimal MFB spectral entropy sketch follows this entry.)

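A minimal sketch of the MFB spectral entropy measure, assuming librosa is available for the Mel filter bank; the frame sizes, the threshold, and the comparison direction are assumptions rather than the paper's settings.

```python
import numpy as np
import librosa  # assumed available, used only for the Mel filter bank

def mfb_spectral_entropy(y: np.ndarray, sr: int, n_mels: int = 23) -> np.ndarray:
    """Per-frame spectral entropy computed over Mel filter bank energies."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                         hop_length=160, n_mels=n_mels)
    prob = mel / (mel.sum(axis=0, keepdims=True) + 1e-12)   # energies -> per-frame pmf
    return -(prob * np.log(prob + 1e-12)).sum(axis=0)       # entropy per frame

def detect_speech(entropy: np.ndarray, threshold: float) -> np.ndarray:
    """Mark frames whose spectral entropy falls below the threshold as speech;
    the comparison direction and the threshold value are assumptions."""
    return entropy < threshold
```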

A Study on Speech Synthesizer Using Distributed System (분산형 시스템을 적용한 음성합성에 관한 연구)

  • Kim, Jin-Woo;Min, So-Yeon;Na, Deok-Su;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.29 no.3 / pp.209-215 / 2010
  • Recently, portable terminals have drawn attention thanks to wireless networks and large-capacity ROM, and as a result TTS (Text-to-Speech) systems are being embedded in portable terminals. Nevertheless, high-quality synthesis is difficult on a portable terminal, while users still demand it. In this paper, we propose a Distributed TTS (DTTS) composed of a server and a terminal. Based on corpus-based speech synthesis, the DTTS can deliver high-quality synthesis. The synthesis system on the server generates optimized speech concatenation information after searching the database and transmits it to the terminal. The synthesis system on the terminal then produces high-quality synthetic speech with low computation using the concatenation information received from the server. The proposed method reduces complexity, lowers power consumption, and allows efficient maintenance. (A toy client/server sketch follows this entry.)
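
A toy sketch of the server/terminal split. The unit database, message format, and unit ids are hypothetical; the point is only that the server returns lightweight concatenation information while the terminal does the cheap concatenation locally.

```python
import numpy as np

# Hypothetical unit inventory assumed to be stored on both server and terminal.
UNIT_DB = {0: np.zeros(1600), 1: 0.1 * np.ones(1600), 2: -0.1 * np.ones(1600)}

def server_select_units(text: str) -> list[dict]:
    """Server side: run the expensive corpus search and return only lightweight
    concatenation information (unit ids and boundary trims), never raw audio."""
    # A real system would perform unit selection over a large corpus here.
    return [{"unit_id": 0, "trim": 0}, {"unit_id": 1, "trim": 80}]

def terminal_synthesize(concat_info: list[dict]) -> np.ndarray:
    """Terminal side: cheap concatenation of locally stored units."""
    return np.concatenate([UNIT_DB[u["unit_id"]][u["trim"]:] for u in concat_info])

waveform = terminal_synthesize(server_select_units("안녕하세요"))
```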

Machine Learning based Speech Disorder Detection System (기계학습 기반의 장애 음성 검출 시스템)

  • Jung, Junyoung;Kim, Gibak
    • Journal of Broadcast Engineering / v.22 no.2 / pp.253-256 / 2017
  • This paper deals with the implementation of a speech disorder detection system based on machine learning classification. Problems with speech are a common early symptom of a stroke or other brain injuries, so detecting a speech disorder may lead to early correction and fast medical treatment of strokes or cerebrovascular accidents. The detection system can be implemented by extracting features from the input speech and classifying them with machine learning algorithms. Ten machine learning algorithms combined with various scaling methods were used to discriminate disordered speech from normal speech. The detection system was evaluated on the TORGO database, which contains dysarthric speech collected from speakers with either cerebral palsy or amyotrophic lateral sclerosis. (A minimal classification sketch follows this entry.)
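
A minimal sketch of the classification stage using scikit-learn. The SVM, the standard scaler, the feature dimensionality, and the random placeholder data are illustrative assumptions; the paper itself compares ten classifiers and several scaling methods on features from TORGO.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: per-utterance feature vectors, y: 0 = normal, 1 = disordered speech.
# Random data stands in for features actually extracted from recordings.
X = np.random.randn(200, 39)
y = np.random.randint(0, 2, size=200)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```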

Nasal Place Detection with Acoustic Phonetic Parameters (음향음성학 파라미터를 사용한 비음 위치 검출)

  • Lee, Suk-Myung;Choi, Jeung-Yoon;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea / v.31 no.6 / pp.353-358 / 2012
  • This paper describes acoustic phonetic parameters for detecting nasal place in a knowledge-based speech recognition system. Initial acoustic phonetic parameters are selected by studying the nasal production mechanism, namely radiation of the sound through the nasal cavity. Nasals are produced with differing articulatory configurations, which can be classified by measuring acoustic phonetic parameters such as band energy ratios, band energy differences, formants, and formant differences. These acoustic phonetic parameters were tested in a classification experiment among labial, alveolar, and velar nasals. An overall classification rate of 57.5% is obtained using the proposed acoustic phonetic parameters on the TIMIT database. (A small band-energy-ratio sketch follows this entry.)
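
A small sketch of one such parameter, a band energy ratio computed from a single windowed frame; the band edges below are illustrative, not the values used in the paper.

```python
import numpy as np

def band_energy_ratio(frame: np.ndarray, sr: int,
                      low_band=(0.0, 500.0), high_band=(500.0, 4000.0)) -> float:
    """Ratio of low-band to high-band energy for one windowed frame."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    low = spectrum[(freqs >= low_band[0]) & (freqs < low_band[1])].sum()
    high = spectrum[(freqs >= high_band[0]) & (freqs < high_band[1])].sum()
    return float(low / (high + 1e-12))
```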

Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition (음성감정인식 성능 향상을 위한 트랜스포머 기반 전이학습 및 다중작업학습)

  • Park, Sunchan;Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.515-522 / 2021
  • It is hard to prepare sufficient training data for speech emotion recognition due to the difficulty of emotion labeling. In this paper, we apply transfer learning with large-scale training data for speech recognition on a transformer-based model to improve the performance of speech emotion recognition. In addition, we propose a method to utilize context information without decoding by multi-task learning with speech recognition. According to the speech emotion recognition experiments using the IEMOCAP dataset, our model achieves a weighted accuracy of 70.6% and an unweighted accuracy of 71.6%, which shows that the proposed method is effective in improving the performance of speech emotion recognition. (A minimal multi-task loss sketch follows this entry.)
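
A minimal sketch of the multi-task idea in PyTorch: one shared encoder output feeds both an emotion classifier and an auxiliary speech recognition head, and the two losses are combined. The CTC formulation, the 0.5 weight, and all dimensions are assumptions here, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHead(nn.Module):
    """Emotion classifier plus an auxiliary ASR (CTC) head sharing one encoder output."""
    def __init__(self, hidden_dim: int = 768, num_emotions: int = 4, vocab_size: int = 32):
        super().__init__()
        self.emotion_head = nn.Linear(hidden_dim, num_emotions)
        self.asr_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, encoder_out: torch.Tensor):
        # encoder_out: (batch, time, hidden) from a pretrained transformer encoder
        emotion_logits = self.emotion_head(encoder_out.mean(dim=1))       # utterance-level pooling
        asr_log_probs = F.log_softmax(self.asr_head(encoder_out), dim=-1)  # per-frame token log-probs
        return emotion_logits, asr_log_probs

def multitask_loss(emotion_logits, emotion_labels, asr_log_probs,
                   targets, input_lengths, target_lengths, alpha: float = 0.5):
    """Weighted sum of the emotion cross-entropy and the auxiliary CTC loss.
    ctc_loss expects log-probs shaped (time, batch, vocab)."""
    ce = F.cross_entropy(emotion_logits, emotion_labels)
    ctc = F.ctc_loss(asr_log_probs.transpose(0, 1), targets, input_lengths, target_lengths)
    return ce + alpha * ctc
```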

Voice Activity Detection Based on Discriminative Weight Training with Feedback (궤환구조를 가지는 변별적 가중치 학습에 기반한 음성검출기)

  • Kang, Sang-Ick;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.27 no.8 / pp.443-449 / 2008
  • One of the key issues in practical speech processing is achieving robust Voice Activity Detection (VAD) against background noise. Most statistical model-based approaches employ equally weighted likelihood ratios (LRs), which, however, deviates from real observations. Furthermore, voice activity in adjacent frames is strongly correlated; in other words, the current frame is highly correlated with the previous frame. In this paper, we propose an effective VAD approach based on a minimum classification error (MCE) method, which differs from previous work in that different weights are assigned to the likelihood ratio of the current frame and to the decision statistic of the previous frame. (A minimal sketch of the weighted decision statistic follows this entry.)
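
A minimal sketch of a decision statistic with feedback from the previous frame. The MCE weight training itself is omitted, and the exact form of the statistic is an assumption; the weights are simply given here.

```python
import numpy as np

def vad_with_feedback(log_likelihood_ratios: np.ndarray, w_current: float,
                      w_previous: float, threshold: float) -> np.ndarray:
    """Frame-wise decision: weight the current frame's log-likelihood ratio and the
    previous frame's decision statistic, then compare against a threshold."""
    decisions = np.zeros(len(log_likelihood_ratios), dtype=bool)
    prev_stat = 0.0
    for t, llr in enumerate(log_likelihood_ratios):
        stat = w_current * llr + w_previous * prev_stat   # feedback from the previous frame
        decisions[t] = stat > threshold
        prev_stat = stat
    return decisions
```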

A Probabilistic Combination Method of Minimum Statistics and Soft Decision for Robust Noise Power Estimation in Speech Enhancement (강인한 음성향상을 위한 Minimum Statistics와 Soft Decision의 확률적 결합의 새로운 잡음전력 추정기법)

  • Park, Yun-Sik;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.26 no.4 / pp.153-158 / 2007
  • This paper presents a new approach to noise estimation to improve speech enhancement in non-stationary noisy environments. The proposed method combines the two separate noise power estimates provided by minimum statistics (MS) for speech presence and soft decision (SD) for speech absence, in accordance with the speech absence probability (SAP) on each frequency bin. The performance of the proposed algorithm is evaluated by subjective tests under various noise environments and yields better results than the conventional MS- or SD-based schemes. (A one-line sketch of the combination follows this entry.)
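
A one-line sketch of the kind of per-bin combination described, assuming a convex mixture weighted by the SAP; the paper's exact mixing rule may differ.

```python
import numpy as np

def combined_noise_power(noise_ms: np.ndarray, noise_sd: np.ndarray,
                         sap: np.ndarray) -> np.ndarray:
    """Per-bin noise power estimate: weight the soft-decision (SD) estimate by the
    speech absence probability and the minimum-statistics (MS) estimate by its complement."""
    return sap * noise_sd + (1.0 - sap) * noise_ms
```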

Parallel Speech Recognition on Distributed Memory Multiprocessors (분산 메모리 다중 프로세서 상에서의 병렬 음성인식)

  • 윤지현;홍성태;정상화;김형순
    • Proceedings of the Korean Information Science Society Conference / 1998.10a / pp.747-749 / 1998
  • This paper proposes an effective parallel computation model for the integrated processing of speech and natural language. The phone models are context-dependent phones based on continuous HMMs, and the language model uses a knowledge-based approach. In addition, memory-based parsing is used to handle multiple hypotheses over a hierarchical knowledge base. The parallel speech recognition algorithm was implemented on a multi-Transputer system with a distributed-memory MIMD architecture. Experiments provide solutions to speech-specific problems that arise during recognition and demonstrate, through parallelization of the recognition system, the feasibility of real-time speech recognition. (A loose process-level sketch follows this entry.)

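The original system ran on a multi-Transputer MIMD machine; purely as a loose modern analogy, hypothesis scoring can be farmed out to separate processes. The scoring function below is a placeholder, not an HMM implementation, and all names are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def score_hypothesis(args):
    """Score one recognition hypothesis against the observation sequence.
    Placeholder arithmetic; a real worker would run Viterbi over continuous-HMM phone models."""
    hypothesis, observations = args
    return hypothesis, -float(np.abs(observations).sum()) / (1 + len(hypothesis))

def parallel_recognize(hypotheses, observations, workers: int = 4):
    """Farm hypothesis scoring out to worker processes; each worker receives its own
    copy of the data, loosely mirroring a distributed-memory decomposition."""
    with Pool(workers) as pool:
        scored = pool.map(score_hypothesis, [(h, observations) for h in hypotheses])
    return max(scored, key=lambda item: item[1])
```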