• Title/Summary/Keyword: 음성기반 (speech-based)


Voice Activity Detection in Noisy Environment based on Statistical Nonlinear Dimension Reduction Techniques (통계적 비선형 차원축소기법에 기반한 잡음 환경에서의 음성구간검출)

  • Han Hag-Yong;Lee Kwang-Seok;Go Si-Yong;Hur Kang-In
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.5
    • /
    • pp.986-994
    • /
    • 2005
  • This paper proposes a likelihood-based nonlinear dimension reduction method for speech feature parameters in order to construct a voice activity detector that adapts to noisy environments. The proposed method uses the nonlinear values of the Gaussian probability density function with new parameters for the speech/non-speech classes. We adopted the likelihood ratio test to detect speech regions and compared its performance with that of linear discriminant analysis. In experiments, the proposed method produced results similar to those of Gaussian mixture models.
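
To make the likelihood-ratio decision concrete, the sketch below scores each feature frame under two diagonal-covariance Gaussian models and labels the frame as speech when the log-likelihood ratio exceeds a threshold. The feature dimensionality, model parameters, and threshold are illustrative placeholders, not the paper's configuration.

```python
# Minimal likelihood-ratio-test VAD sketch (illustrative parameters only).
import numpy as np

def gaussian_log_likelihood(x, mean, var):
    """Log-density of a diagonal-covariance Gaussian at feature vector x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def lrt_vad(features, speech_model, noise_model, threshold=0.0):
    """Label each frame as speech (True) when the log-likelihood ratio
    log p(x|speech) - log p(x|non-speech) exceeds the threshold."""
    decisions = []
    for x in features:
        llr = (gaussian_log_likelihood(x, *speech_model)
               - gaussian_log_likelihood(x, *noise_model))
        decisions.append(llr > threshold)
    return np.array(decisions)

# Toy usage with hypothetical 12-dimensional cepstral features.
rng = np.random.default_rng(0)
speech_model = (np.ones(12), np.full(12, 2.0))   # (mean, variance), assumed
noise_model = (np.zeros(12), np.full(12, 1.0))
frames = rng.normal(size=(5, 12))
print(lrt_vad(frames, speech_model, noise_model))
```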

Study of Speech Recognition System Operation for Voice-driven UAV Control (음성 기반 무인 항공기 제어를 위한 음성인식 시스템 운용 체계 연구)

  • Park, Jeong-Sik
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.47 no.3
    • /
    • pp.212-219
    • /
    • 2019
  • As unmanned aerial vehicles (UAVs) have been utilized for military operations, efficient ways of controlling them have become necessary. In particular, instead of the conventional approach using console control, speech recognition based UAV control is essential for military environments in which rapid command operation is required, but this approach has not yet been actively studied. In this study, we introduce efficient ways of operating a speech recognition system for voice-driven UAV control, focusing on mission command and control from a manned aircraft rather than from a ground control center. We propose an efficient system operation scheme for UAV control in cooperation between aircraft and UAV, and verify its efficiency via a speech recognition experiment.

Corpus-based Korean Text-to-speech Conversion System (콜퍼스에 기반한 한국어 문장/음성변환 시스템)

  • Kim, Sang-hun;Park, Jun;Lee, Young-jik
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.3
    • /
    • pp.24-33
    • /
    • 2001
  • This paper describes a baseline implementation of a corpus-based Korean TTS system. Conventional TTS systems built on small speech databases still generate machine-like synthetic speech. To overcome this problem, we introduce a corpus-based TTS system that can generate natural synthetic speech without prosodic modification. The corpus should contain the natural prosody of the source speech and multiple instances of each synthesis unit. To obtain phone-level synthesis units, we train a speech recognizer on the target speech and then perform automatic phoneme segmentation. We also detect the fine pitch period using laryngograph signals, which is used for prosodic feature extraction. For break strength allocation, four levels of break indices are defined according to pause length and attached to phones to reflect prosodic variation at phrase boundaries. To predict break strength from text, we utilize statistical information on POS (part-of-speech) sequences. The best triphone sequences are selected by a Viterbi search that minimizes the accumulated Euclidean distance of the concatenation distortion. To obtain high-quality synthetic speech suitable for commercial use, we introduce a domain-specific database; adding it to the general-domain database greatly improves the quality of synthetic speech in that domain. In a subjective evaluation, the new corpus-based Korean TTS system showed better naturalness than the conventional demisyllable-based one.

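The Viterbi unit-selection step described above can be sketched as a small dynamic program: among the candidate instances of each synthesis unit, choose the sequence minimizing the accumulated Euclidean distance between the join features of adjacent units. The random candidate features below are stand-ins for real spectral join vectors, and the cost is a simplification of the paper's concatenation distortion.

```python
# Minimal Viterbi unit-selection sketch over per-position candidate units.
import numpy as np

def viterbi_unit_selection(candidates):
    """candidates[t] is an (n_t, d) array of join-feature vectors for the
    candidate instances at unit position t. Returns the chosen candidate
    index at each position under minimal accumulated concatenation cost."""
    cost = np.zeros(len(candidates[0]))
    backptr = []
    for prev, cur in zip(candidates, candidates[1:]):
        # join[i, j] = Euclidean distance between prev candidate i, cur candidate j
        join = np.linalg.norm(prev[:, None, :] - cur[None, :, :], axis=2)
        total = cost[:, None] + join
        backptr.append(np.argmin(total, axis=0))
        cost = np.min(total, axis=0)
    # Backtrack from the cheapest final candidate.
    path = [int(np.argmin(cost))]
    for bp in reversed(backptr):
        path.append(int(bp[path[-1]]))
    return path[::-1]

rng = np.random.default_rng(1)
candidates = [rng.normal(size=(4, 13)) for _ in range(6)]  # 6 units, 4 instances each
print(viterbi_unit_selection(candidates))
```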

Speech Synthesis Based on CVC Speech Segments Extracted from Continuous Speech (연속 음성으로부터 추출한 CVC 음성세그먼트 기반의 음성합성)

  • 김재홍;조관선;이철희
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.7
    • /
    • pp.10-16
    • /
    • 1999
  • In this paper, we propose a concatenation-based speech synthesizer using CVC (consonant-vowel-consonant) speech segments extracted from a continuous speech corpus that was not specifically designed for synthesis. Natural synthetic speech can be generated by properly modelling coarticulation effects between phonemes and by using natural prosodic variations. In general, CVC synthesis units show smaller acoustic degradation of speech quality, since concatenation points are located in consonant regions and the units properly model the coarticulation of vowels affected by surrounding consonants. We analyze the characteristics and the number of required synthesis units of four types of speech synthesis methods that use CVC units, compare the speech quality of the four types, and propose a new synthesis method based on the most promising type in terms of speech quality and implementability. We then implement the method on the speech corpus and synthesize various examples; CVC segments missing from the corpus are substituted with alternative segments. Experiments demonstrate that CVC speech segments extracted from a continuous speech corpus of about 100 MB can produce high-quality synthetic speech.

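A minimal sketch of CVC unit indexing and lookup over a phone-labeled corpus follows, with a fallback when a requested consonant-vowel-consonant triple is absent (the paper substitutes other segments for missing CVC units). The phone labels and the vowel-context fallback policy are illustrative assumptions.

```python
# Index all CVC triples in a phone-labeled corpus and look units up,
# falling back to same-vowel units when the exact triple is missing.
VOWELS = {"a", "e", "i", "o", "u"}

def index_cvc_units(phone_sequence):
    """Collect the corpus position of every consonant-vowel-consonant triple."""
    units = {}
    for i in range(len(phone_sequence) - 2):
        c1, v, c2 = phone_sequence[i:i + 3]
        if c1 not in VOWELS and v in VOWELS and c2 not in VOWELS:
            units.setdefault((c1, v, c2), []).append(i)
    return units

def select_unit(units, triple):
    """Return corpus positions for a CVC triple, falling back to any unit
    sharing the same vowel when the exact triple is absent (assumption)."""
    if triple in units:
        return units[triple]
    v = triple[1]
    return [pos for (c1, vv, c2), ps in units.items() if vv == v for pos in ps]

corpus = list("kamtansolbakmit")  # toy phone string
units = index_cvc_units(corpus)
print(select_unit(units, ("k", "a", "m")))
print(select_unit(units, ("z", "a", "p")))  # absent triple -> vowel-context fallback
```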

Robust Distributed Speech Recognition under noise environment using MESS and EH-VAD (멀티밴드 스펙트럼 차감법과 엔트로피 하모닉을 이용한 잡음환경에 강인한 분산음성인식)

  • Choi, Gab-Keun;Kim, Soon-Hyob
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.1
    • /
    • pp.101-107
    • /
    • 2011
  • Background noise and channel distortion are the major factors that hinder the practical use of speech recognition. Noise usually degrades the performance of speech recognition systems, and DSR (Distributed Speech Recognition) based systems also have difficulty improving performance for this reason. Therefore, to improve DSR-based speech recognition in noisy environments, this paper proposes a method that detects accurate speech regions in order to extract accurate features. The proposed method distinguishes speech from noise using entropy and detection of the spectral energy of speech. Speech detection based on spectral energy performs well at relatively high SNR (15 dB), but when the noise environment varies, the threshold between speech and noise also varies, and detection performance degrades at low SNR (0 dB). The proposed method uses spectral entropy and the harmonics of speech for better speech detection, and the performance of the AFE front-end is increased by precise speech detection. Experimental results show that the proposed method achieves better recognition performance in noisy environments.
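
The spectral-entropy cue mentioned above can be sketched as follows: speech frames concentrate energy in a few harmonic bins (low entropy), while broadband noise spreads energy evenly across bins (high entropy). Frame length, FFT size, and the threshold below are illustrative assumptions.

```python
# Spectral-entropy VAD sketch: low entropy suggests speech-like structure.
import numpy as np

def spectral_entropy(frame, n_fft=512):
    spectrum = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    p = spectrum / (np.sum(spectrum) + 1e-12)   # normalize to a probability mass
    return -np.sum(p * np.log(p + 1e-12))

def entropy_vad(signal, frame_len=400, hop=160, threshold=4.5):
    """Mark a frame as speech when its spectral entropy falls below threshold."""
    flags = []
    for start in range(0, len(signal) - frame_len, hop):
        flags.append(spectral_entropy(signal[start:start + frame_len]) < threshold)
    return np.array(flags)

rng = np.random.default_rng(2)
noise = rng.normal(size=16000)                              # 1 s of noise @ 16 kHz
tone = np.sin(2 * np.pi * 200 * np.arange(16000) / 16000)   # harmonic-like signal
print(entropy_vad(noise).mean(), entropy_vad(noise + 5 * tone).mean())
```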

Improved Speech Enhancement Algorithm employing Multi-band Power Subtraction and Wavelet Packets Decomposition (Multi-band Power Subtraction과 Wavelet Packets Decomposition을 이용한 개선된 음성 향상 방법)

  • Lee Yoon-Chang;Kwak Jeong-Hoon;Ahn Sang-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.6C
    • /
    • pp.589-602
    • /
    • 2006
  • Since noise is a major factor limiting the performance of speech-related systems, research on speech enhancement has continued steadily. Conventional speech enhancement methods cannot distinguish unvoiced sounds from noise, so unvoiced sounds are removed along with the noise; conventional wavelet-based denoising uses the same threshold in every band, so its performance degrades in time-varying environments. To address these shortcomings, we propose an improved wavelet-based speech enhancement method using multi-band power subtraction and perceptual wavelet packet decomposition. As a preprocessing step, multi-band power subtraction removes wideband noise and reduces the occurrence of musical noise. The signal is then decomposed with perceptual wavelet packets based on a psycho-acoustic model, and unvoiced/voiced/noise regions are classified using the entropy ratio of each wavelet node together with voice activity detection. According to the classified signal, wavelet shrinkage is applied with a per-node threshold to remove noise while minimizing the erroneous removal of unvoiced sounds and low-power voiced sounds. In addition, the forgetting factor in the noise power estimation is selected adaptively to minimize noise power estimation errors.
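
A minimal sketch of the multi-band power subtraction preprocessing step described above: the power spectrum is split into bands, a band-specific over-subtraction factor removes the estimated noise floor, and the result is clamped to a spectral floor to limit musical noise. The band edges and factors are illustrative, not the paper's settings.

```python
# Multi-band power subtraction sketch with per-band over-subtraction factors.
import numpy as np

def multiband_power_subtraction(noisy_power, noise_power, band_edges, factors,
                                floor=0.01):
    """Subtract factors[b] * estimated noise power in each band, clamping to
    a fraction of the noisy power to avoid negative (musical-noise-prone) bins."""
    enhanced = np.empty_like(noisy_power)
    for (lo, hi), alpha in zip(zip(band_edges[:-1], band_edges[1:]), factors):
        diff = noisy_power[lo:hi] - alpha * noise_power[lo:hi]
        enhanced[lo:hi] = np.maximum(diff, floor * noisy_power[lo:hi])
    return enhanced

rng = np.random.default_rng(3)
noisy = rng.uniform(1.0, 4.0, size=256)   # toy power spectrum of one frame
noise = np.full(256, 1.0)                 # noise estimate from non-speech frames
edges = [0, 64, 128, 256]                 # three bands (assumed split)
print(multiband_power_subtraction(noisy, noise, edges, factors=[2.0, 1.5, 1.0])[:5])
```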

A single-channel speech enhancement method based on restoration of both spectral amplitudes and phases for push-to-talk communication (Push-to-talk 통신을 위한 진폭 및 위상 복원 기반의 단일 채널 음성 향상 방식)

  • Cho, Hye-Seung;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.36 no.1
    • /
    • pp.64-69
    • /
    • 2017
  • In this paper, we propose a single-channel speech enhancement method based on the restoration of both spectral amplitudes and phases for PTT (Push-To-Talk) communication. Unlike other single-channel speech enhancement methods that use spectral amplitudes only, the proposed method combines spectral amplitude and phase enhancement to provide high-quality speech. We carried out a side-by-side comparison experiment in various non-stationary noise environments to evaluate the performance of the proposed method. The experimental results show that the proposed method provides higher-quality speech than the other methods under different noise conditions.
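
A minimal sketch pairing amplitude enhancement with phase restoration follows. Here the amplitude step is plain spectral subtraction and the phase step is a generic Griffin-Lim-style consistency iteration; both stand in for the paper's own amplitude and phase restoration, which this listing does not detail.

```python
# Amplitude enhancement (spectral subtraction) + iterative phase refinement.
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy, noise_only, fs=16000, nperseg=512, iters=10):
    """Estimate amplitudes by spectral subtraction, then refine phases by
    repeated iSTFT/STFT round-trips for spectrogram consistency."""
    _, _, N = stft(noise_only, fs=fs, nperseg=nperseg)
    noise_pow = np.mean(np.abs(N) ** 2, axis=1)           # per-bin noise power
    _, _, Z = stft(noisy, fs=fs, nperseg=nperseg)
    mag = np.sqrt(np.maximum(np.abs(Z) ** 2 - noise_pow[:, None], 1e-12))
    phase = np.angle(Z)                                   # initialize with noisy phase
    for _ in range(iters):                                # enforce STFT consistency
        _, x = istft(mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
        _, _, Z2 = stft(x, fs=fs, nperseg=nperseg)
        phase = np.angle(Z2)
    _, out = istft(mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return out

rng = np.random.default_rng(4)
clean = np.sin(2 * np.pi * 300 * np.arange(16000) / 16000)
noise = 0.3 * rng.normal(size=20000)
print(enhance(clean + noise[:16000], noise[16000:]).shape)
```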

A study on combination of loss functions for effective mask-based speech enhancement in noisy environments (잡음 환경에 효과적인 마스크 기반 음성 향상을 위한 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.3
    • /
    • pp.234-240
    • /
    • 2021
  • In this paper, mask-based speech enhancement is improved for effective speech recognition in noisy environments. In mask-based speech enhancement, the enhanced spectrum is obtained by multiplying the noisy speech spectrum by the mask. The VoiceFilter (VF) model is used for mask estimation, and the Spectrogram Inpainting (SI) technique is used to remove residual noise from the enhanced spectrum. We propose a combined loss to further improve speech enhancement: to effectively remove residual noise in the speech, the positive part of the triplet loss is used together with the component loss. For the experiments, the TIMIT database is reconstructed using NOISEX92 noise and background music samples under various Signal-to-Noise Ratio (SNR) conditions. Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) are used as performance metrics. When the VF model was trained with the mean squared error and the SI model was trained with the combined loss, SDR, PESQ, and STOI improved by 0.5, 0.06, and 0.002, respectively, compared to the system trained with the mean squared error alone.
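
A minimal PyTorch sketch of the combined-loss idea: a component loss with separate speech-preservation and noise-suppression terms, plus the positive part of the triplet loss (the anchor-to-positive distance alone, pulling the enhanced spectrum toward the clean target). The exact term definitions and weights are assumptions, not the paper's formulation.

```python
# Combined loss sketch: component terms + positive part of the triplet loss.
import torch

def combined_loss(enhanced, clean, speech_mask, w_triplet=0.1):
    """Speech-preservation error on speech-dominant bins, residual-noise
    penalty elsewhere, plus the anchor-to-positive triplet distance."""
    speech_err = torch.mean((speech_mask * (enhanced - clean)) ** 2)
    noise_resid = torch.mean(((1 - speech_mask) * enhanced) ** 2)
    positive_part = torch.mean(torch.norm(enhanced - clean, dim=-1))
    return speech_err + noise_resid + w_triplet * positive_part

enhanced = torch.rand(8, 257, requires_grad=True)  # batch of enhanced spectra
clean = torch.rand(8, 257)
mask = (clean > 0.5).float()                       # toy speech-dominance mask
loss = combined_loss(enhanced, clean, mask)
loss.backward()
print(float(loss))
```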

Strategy for Implementing a Voice Web Browser Based on WIPI (WIPI기반 음성 웹브라우저 구현 방안)

  • Yu Se-Young;Kim Byung-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2006.05a
    • /
    • pp.501-504
    • /
    • 2006
  • As the Internet and mobile phones have become commonplace and speech processing technology has matured to the practical stage, voice applications are emerging as a new issue. Speech processing technology is a new field that gives systems an ear that can understand human speech and a mouth that can speak to people. VoiceXML and SALT, standard languages for developing web content by voice, are spreading rapidly. Speech recognition and speech synthesis have advanced steadily and reached the level of commercialization, being adopted in voice portal services and automated voice guidance systems. Speech is the most convenient way for people to acquire information, and voice web browsers employing it are already in use on wired networks; however, no such browser has yet been developed for wireless platforms. To provide users with a familiar wireless Internet environment, we present a strategy for implementing a wireless voice web browser.


Speech Based Multimodal Interface Technologies and Standards (음성기반 멀티모달 인터페이스 및 표준)

  • Hong Ki-Hyung
    • MALSORI
    • /
    • no.51
    • /
    • pp.117-135
    • /
    • 2004
  • In this paper, we introduce multimodal user interface technology, with a focus on speech. We classify multimodal interface technologies into four classes: sequential, alternate, supplementary, and semantic multimodal interfaces. After introducing the four types, we describe the standardization activities currently under way.
