• Title/Summary/Keyword: 음성효율


An efficient video multiplexer for the transmission of the DMB multimedia data (DMB 멀티미디어 데이터의 전송을 위한 효율적인 비디오 다중화기)

  • Na Nam-Woong;Baek Sun-Hye;Hong Sung-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2003.11a / pp.183-186 / 2003
  • DMB (Digital Multimedia Broadcasting) is a new broadcasting standard for providing multimedia services, including video, audio, and text data, based on the Eureka-147 DAB (Digital Audio Broadcasting) transmission system, the European digital audio broadcasting standard. Accordingly, in addition to the Eureka-147 DAB transmission part, the DMB system has a structure to which a media compression encoder/decoder for compressing video and audio and a video multiplexer/demultiplexer for multiplexing the compressed media streams are added. Through an analysis of the video multiplexing part of the DMB standard, this paper presents a new video multiplexing structure that can provide extended transmission functions and higher transmission efficiency. In addition, to evaluate the performance of the standard video multiplexer and the proposed video multiplexer, we analyze them functionally and measure their transmission efficiency through simulation.

  • PDF

A Comparative Study of Speech Parameters for Speech Recognition Neural Network (음성 인식 신경망을 위한 음성 파라미터들의 성능 비교)

  • Kim, Ki-Seok;Im, Eun-Jin;Hwang, Hee-Yung
    • The Journal of the Acoustical Society of Korea / v.11 no.3 / pp.61-66 / 1992
  • Many studies have applied neural network models to automatic speech recognition, but the main trend has been to find neural network models and learning rules appropriate to the task. However, the choice of input speech parameter for the neural network, as well as the neural network model itself, is a very important factor in improving the performance of a neural-network-based automatic speech recognition system. In this paper we select six speech parameters from a survey of speech recognition papers that use neural networks, and analyze their performance on the same data with the same neural network model. We use 8 sets of 9 Korean plosives and 18 sets of 8 Korean vowels. We use a recurrent neural network and compare the performance of the six speech parameters while the number of nodes is held constant. The delta cepstrum of linear predictive coefficients showed the best results, with recognition rates of 95.1% for the vowels and 100.0% for the plosives. (An illustrative sketch of this feature appears after this entry.)

  • PDF
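
The best-performing feature above, the delta cepstrum of linear predictive (LP) coefficients, is a standard front-end feature. The following is a minimal sketch of one common way to obtain cepstral coefficients from LP coefficients and to take frame-to-frame deltas; the orders, delta window width, and sign convention are assumptions, not the authors' exact configuration.

```python
import numpy as np

def lpc_to_cepstrum(a, n_ceps):
    """Convert LP coefficients a[0..p-1] (convention A(z) = 1 - sum_k a_k z^-k)
    to cepstral coefficients via the standard recursion."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

def delta(features, width=2):
    """Regression-based delta over a (frames x dims) feature matrix."""
    padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
    denom = 2.0 * sum(d * d for d in range(1, width + 1))
    out = np.zeros_like(features, dtype=float)
    for t in range(features.shape[0]):
        for d in range(1, width + 1):
            out[t] += d * (padded[t + width + d] - padded[t + width - d])
    return out / denom
```

A delta-cepstrum vector for frame t is then simply row t of delta(cepstra), where cepstra stacks the lpc_to_cepstrum outputs of consecutive frames.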

Noise Elimination Using Improved MFCC and Gaussian Noise Deviation Estimation

  • Sang-Yeob, Oh
    • Journal of the Korea Society of Computer and Information / v.28 no.1 / pp.87-92 / 2023
  • With the continuous development of speech recognition systems, the recognition rate for speech has improved rapidly, but such systems still cannot recognize speech accurately because of the noise produced when various voices are mixed with the noise of the usage environment. To increase the vocabulary recognition rate when processing speech with environmental noise, the noise must be removed. Even in existing HMM, CHMM, GMM, and DNN approaches that apply AI models, unexpected noise occurs, or quantization noise is inherently added to the digital signal. When this happens, the source signal is altered or corrupted, which lowers the recognition rate. To solve this problem, the MFCC was improved so that the features of the speech signal could be extracted efficiently for each frame. To remove noise from the speech signal, a noise removal method using a Gaussian model with noise deviation estimation was improved and applied. The performance of the proposed model was evaluated using a cross-correlation coefficient to assess the accuracy of the speech. In the evaluation of the recognition rate of the proposed method, the difference in the average value of the correlation coefficient was confirmed to improve by 0.53 dB.
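
The cross-correlation coefficient used above as the evaluation metric can be computed, for a time-aligned reference and processed signal of equal length, roughly as follows; this is a minimal sketch of a normalized correlation measure, not the paper's exact evaluation procedure.

```python
import numpy as np

def cross_correlation_coefficient(reference, processed):
    """Normalized (Pearson-style) correlation between a reference speech
    signal and its noise-suppressed version; 1.0 means identical shape."""
    x = np.asarray(reference, dtype=float) - np.mean(reference)
    y = np.asarray(processed, dtype=float) - np.mean(processed)
    denom = np.sqrt(np.sum(x * x) * np.sum(y * y))
    return float(np.sum(x * y) / denom) if denom > 0 else 0.0
```

Comparing the coefficient before and after noise removal quantifies how much closer the processed signal has moved toward the clean reference.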

An acoustic study of feeling information extracting method (음성을 이용한 감정 정보 추출 방법)

  • Lee, Yeon-Soo;Park, Young-B.
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.1 / pp.51-55 / 2010
  • Tele-marketing services are provided through voice media in many places, such as modern call centers. Modern call centers try to measure their service quality, and one measurement method is to extract the speaker's emotional information from the voice. In this study, we propose analyzing the speaker's voice in order to extract emotional information. For this purpose, a person's emotional state is categorized by analyzing several types of signal parameters in the voice signal. A person's emotional state can be categorized into four states: joy, sorrow, excitement, and normality. Under normal conditions, an excited or angry state can be a major factor in service quality. In this paper, we propose selecting conversations with problems by extracting the speaker's emotional information based on the pitch and amplitude of the voice.
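
Per-frame pitch and amplitude features of the kind used above are commonly estimated with an autocorrelation pitch detector and RMS energy; the sketch below is an illustrative baseline with an assumed sampling rate, pitch range, and voicing threshold, not the authors' exact extractor.

```python
import numpy as np

def frame_features(frame, sr=8000, fmin=60.0, fmax=400.0):
    """Return (pitch_hz, rms) for one speech frame.
    Pitch is taken from the autocorrelation peak in a plausible lag range;
    a pitch of 0.0 marks frames judged unvoiced or silent."""
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()
    rms = float(np.sqrt(np.mean(frame ** 2)))
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)
    lag_max = min(int(sr / fmin), len(ac) - 1)
    if lag_max <= lag_min or ac[0] <= 0.0:
        return 0.0, rms
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max + 1]))
    pitch = sr / lag if ac[lag] > 0.3 * ac[0] else 0.0   # crude voicing check
    return pitch, rms
```

Tracking how these two values move over the course of a call (for example, rising pitch and energy together) is the kind of evidence such a system would use to flag excited or angry speech.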

On the Perceptually Important Phase Information in Acoustic Signal (인지에 중요한 음향신호의 위상에 대해)

    • The Journal of the Acoustical Society of Korea / v.19 no.7 / pp.28-33 / 2000
  • For efficient quantization of speech representations, it is common to incorporate the perceptual characteristics of human hearing. However, the focus has been confined to the magnitude information of speech, and little attention has been paid to phase information. This paper presents a novel approach, termed perceptually irrelevant phase elimination (PIPE), to find the phase information of acoustic signals that is irrelevant to perception. The proposed method, which is based on the observation that the relative phase relationship within a critical band is perceptually important, is derived not only for stationary Fourier signals but also for harmonic signals. The proposed method is incorporated into an analysis/synthesis system based on a harmonic representation of speech, and subjective test results demonstrate its effectiveness.

  • PDF

Spoken Dialogue Management System based on Word Spotting (단어추출을 기반으로 한 음성 대화처리 시스템)

  • Song, Chang-Hwan;Yu, Ha-Jin;Oh, Yung-Hwan
    • Annual Conference on Human and Language Technology / 1994.11a / pp.313-317 / 1994
  • In this study, we implemented a spoken dialogue system between a human and a computer. In particular, we investigated a semantic analysis method suited to speech recognition based on word spotting, and a method of generating system responses from rules in table form. When speech is recognized by word spotting, it is difficult to analyze the user's intention through morphological and syntactic analysis, so a new semantic analysis method is needed. In this study, we propose a new semantic analysis method that identifies the user's intention using fuzzy relations. In addition, to produce system responses appropriate to the user's intention and to manage the response contents efficiently, we built response rules that depend on the current state and the user's intention. These rules are implemented in table form, which makes them easy to update and extend. The dialogue domain is restricted to train-ticket reservation, cancellation, inquiries, and tourist information. To cope with errors caused by speech misrecognition, the system's responses include confirmation and correction steps. In experiments with both text input and speech input, users were able to achieve their goals with the help of the system. (An illustrative sketch of this style of intent scoring and table lookup appears after this entry.)

  • PDF
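
Because word spotting yields only isolated keywords, intent can be scored through a fuzzy keyword-to-intent relation and the reply looked up in a state/intent response table, roughly as sketched below. The keywords, membership weights, states, and table entries are purely illustrative assumptions; the paper's actual fuzzy relation and rules are not reproduced here.

```python
# Hypothetical fuzzy relation: membership of each spotted keyword in each intent.
FUZZY_RELATION = {
    "reserve": {"reservation": 0.9, "ticket": 0.6, "Busan": 0.4},
    "cancel":  {"cancel": 0.9, "reservation": 0.3},
    "inquire": {"schedule": 0.8, "fare": 0.8},
}

# Hypothetical response table indexed by (dialogue state, intent).
RESPONSE_TABLE = {
    ("start", "reserve"): "Which station would you like a ticket to?",
    ("start", "cancel"):  "Please tell me the reservation number to cancel.",
    ("start", "inquire"): "Are you asking about train times or fares?",
}

def infer_intent(spotted_words):
    """Score each intent by its best-matching spotted keyword (max-min style)."""
    scores = {
        intent: max((m for word, m in members.items() if word in spotted_words),
                    default=0.0)
        for intent, members in FUZZY_RELATION.items()
    }
    return max(scores, key=scores.get), scores

def respond(state, spotted_words):
    intent, _ = infer_intent(spotted_words)
    return RESPONSE_TABLE.get((state, intent),
                              "Could you say that again, please?")

print(respond("start", ["Busan", "reservation"]))  # -> reservation prompt
```

Keeping the rules in a plain table like this is what makes them easy to update or extend, as the abstract notes.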

Signal Subspace-based Voice Activity Detection Using Generalized Gaussian Distribution (일반화된 가우시안 분포를 이용한 신호 준공간 기반의 음성검출기법)

  • Um, Yong-Sub;Chang, Joon-Hyuk;Kim, Dong Kook
    • The Journal of the Acoustical Society of Korea / v.32 no.2 / pp.131-137 / 2013
  • In this paper we propose an improved voice activity detection (VAD) algorithm using statistical models in the signal subspace domain. An uncorrelated signal subspace is generated using an embedded prewhitening technique, and the statistical characteristics of the noisy speech and noise are investigated in this domain. Based on the characteristics of the signals in the signal subspace, a new statistical VAD method using the generalized Gaussian distribution (GGD) is proposed. Experimental results show that the proposed GGD-based approach outperforms the Gaussian-based signal subspace method under 0-15 dB SNR simulation conditions.
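
The generalized Gaussian density underlying this detector has a simple closed form, so a per-frame decision can be sketched as a log-likelihood-ratio test over the subspace coefficients. The shape/scale parameters and the threshold below are assumed values, and the embedded prewhitening and subspace decomposition steps are omitted; this illustrates the GGD statistic only, not the paper's full algorithm.

```python
import numpy as np
from scipy.special import gammaln

def ggd_logpdf(x, alpha, beta):
    """Log-density of a zero-mean generalized Gaussian with scale alpha and
    shape beta (beta = 2 recovers a Gaussian, beta = 1 a Laplacian)."""
    return (np.log(beta) - np.log(2.0 * alpha) - gammaln(1.0 / beta)
            - (np.abs(x) / alpha) ** beta)

def vad_decision(coeffs, noise=(1.0, 2.0), speech=(3.0, 1.0), threshold=0.0):
    """Declare speech when the mean GGD log-likelihood ratio of the subspace
    coefficients exceeds the threshold; `noise` and `speech` are (alpha, beta)
    pairs that would normally be estimated from data (assumed here)."""
    coeffs = np.asarray(coeffs, dtype=float)
    llr = ggd_logpdf(coeffs, *speech) - ggd_logpdf(coeffs, *noise)
    return bool(np.mean(llr) > threshold)
```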

Design and Implementation of the Mobile Lecture Support System (모바일 기반 강의 지원 시스템 설계 및 구현)

  • Kim, JunSik;Choi, YoungGil;Park, Suhyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.05a / pp.457-459 / 2016
  • This lecture support system can be used to deliver sound smoothly in an auditorium or a particular outdoor space. Commercial lecture support systems are useful but very expensive. Therefore, in this paper we design and implement a lecture support system that uses a mobile phone. With this system, a lecture can be heard clearly even in difficult listening conditions. The system also provides the ability to save a lecture to storage, so the lecture can be replayed repeatedly.

  • PDF

A Study on Numeral Speech Recognition Using Integration of Speech and Visual Parameters under Noisy Environments (잡음환경에서 음성-영상 정보의 통합 처리를 사용한 숫자음 인식에 관한 연구)

  • Lee, Sang-Won;Park, In-Jung
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.3 / pp.61-67 / 2001
  • In this paper, a method that applies the LP algorithm to images is suggested for speech recognition, using both speech and image information to recognize Korean numeral speech. The input speech signal is pre-emphasized with a parameter value of 0.95 and analyzed for LP coefficients using a Hamming window, autocorrelation, and the Levinson-Durbin algorithm. Likewise, a gray image signal is analyzed for two-dimensional LP coefficients using autocorrelation and the Levinson-Durbin algorithm. These parameters are used as inputs to a neural network trained with the back-propagation algorithm. The recognition experiment was carried out at each noise level, and recognition of the three numerals '3', '5', and '9' was improved. Thus, recognizing speech with two-dimensional LP parameters yields a high recognition rate with a small parameter size and a simple algorithm that requires no additional feature extraction. (An illustrative sketch of the one-dimensional LP front end appears after this entry.)

  • PDF
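
The one-dimensional speech front end described above (pre-emphasis with 0.95, Hamming windowing, autocorrelation, Levinson-Durbin) is standard and can be sketched as below. The LP order and frame handling are assumptions, and the two-dimensional LP analysis of the image is not shown.

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: autocorrelation r[0..order] -> LP coefficients
    a[1..order] in the convention s[n] ~ sum_k a[k] * s[n-k]."""
    a = np.zeros(order + 1)
    e = r[0]
    for i in range(1, order + 1):
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / e
        a_next = a.copy()
        a_next[i] = k
        a_next[1:i] = a[1:i] - k * a[i - 1:0:-1]
        a, e = a_next, e * (1.0 - k * k)
    return a[1:], e

def lp_coefficients(frame, order=10, preemph=0.95):
    """Pre-emphasis, Hamming window, autocorrelation, then Levinson-Durbin."""
    frame = np.asarray(frame, dtype=float)
    x = np.append(frame[0], frame[1:] - preemph * frame[:-1])   # pre-emphasis
    x = x * np.hamming(len(x))
    r = np.array([np.dot(x[:len(x) - lag], x[lag:]) for lag in range(order + 1)])
    a, _err = levinson_durbin(r, order)
    return a
```

The resulting coefficient vector (one per frame) is what would be fed, alongside the image-derived 2-D LP parameters, to the back-propagation network.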

A Basic Performance Evaluation of the Speech Recognition APP of Standard Language and Dialect using Google, Naver, and Daum KAKAO APIs (구글, 네이버, 다음 카카오 API 활용앱의 표준어 및 방언 음성인식 기초 성능평가)

  • Roh, Hee-Kyung;Lee, Kang-Hee
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.12 / pp.819-829 / 2017
  • In this paper, we describe the current state of speech recognition technology, first identifying basic speech recognition technologies and algorithms and then explaining the code flow of the APIs needed for speech recognition. We use the application programming interfaces (APIs) of Google, Naver, and Daum Kakao, which operate the best-known search engines among speech recognition API providers, to create a speech recognition app in the Android Studio tool. We then perform speech recognition experiments on standard-language speech and dialect speech according to gender, age, and region, and organize the recognition rates into a table. Experiments were conducted for the Gyeongsang-do, Chungcheong-do, and Jeolla-do regions, where the dialects are strong, and comparative experiments were also conducted on the standard language. Based on the resulting sentences, the accuracy of each sentence is checked in terms of word spacing, final consonants, postpositions, and words, and the number of each type of error is counted. As a result, we aim to introduce the advantages of each API according to its speech recognition rate and to establish a basic framework for the most efficient use.
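
The sentence-accuracy check described above compares each recognized sentence against its reference and counts error types. The sketch below tallies word-level substitutions, insertions, and deletions plus word-spacing differences; the error categories and counting rules are illustrative assumptions, not the paper's exact criteria.

```python
import difflib

def space_positions(text):
    """Indices (counted over non-space characters) after which a space occurs."""
    positions, idx = set(), 0
    for ch in text:
        if ch == " ":
            positions.add(idx)
        else:
            idx += 1
    return positions

def count_errors(reference, recognized):
    """Rough error tally between a reference transcript and a recognized one."""
    ref_words, rec_words = reference.split(), recognized.split()
    errors = {"substitution": 0, "insertion": 0, "deletion": 0, "spacing": 0}
    matcher = difflib.SequenceMatcher(None, ref_words, rec_words)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "replace":
            errors["substitution"] += max(i2 - i1, j2 - j1)
        elif tag == "insert":
            errors["insertion"] += j2 - j1
        elif tag == "delete":
            errors["deletion"] += i2 - i1
    errors["spacing"] = len(space_positions(reference) ^ space_positions(recognized))
    return errors

# Example: a mis-segmented recognition result; prints per-category error counts.
print(count_errors("나는 학교에 간다", "나는 학교 에간다"))
```

Summing these per-category counts over all test sentences gives the kind of comparison table the paper describes for each API, region, and speaker group.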