• Title/Abstract/Keyword: Speech Quality


An Introduction to Energy-Based Blind Separating Algorithm for Speech Signals

  • Mahdikhani, Mahdi;Kahaei, Mohammad Hossein
    • ETRI Journal
    • /
    • Vol. 36, No. 1
    • /
    • pp.175-178
    • /
    • 2014
  • We introduce the Energy-Based Blind Separating (EBS) algorithm for extremely fast separation of mixed speech signals without loss of quality, which is performed in two stages: iterative-form separation and closed-form separation. This algorithm significantly improves the separation speed simply due to incorporating only some specific frequency bins into computations. Simulation results show that, on average, the proposed algorithm is 43 times faster than the independent component analysis (ICA) for speech signals, while preserving the separation quality. Also, it outperforms the fast independent component analysis (FastICA), the joint approximate diagonalization of eigenmatrices (JADE), and the second-order blind identification (SOBI) algorithm in terms of separation quality.
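
The EBS speedup described above comes from incorporating only some specific frequency bins into the computation. As a hypothetical illustration only (the paper's actual bin-selection criterion is not given here), one might keep just the bins that carry most of the spectral energy:

```python
def high_energy_bins(spectrum_power, energy_fraction=0.9):
    """Return the indices of the fewest bins whose cumulative power
    covers `energy_fraction` of the total, taking the largest bins first."""
    total = sum(spectrum_power)
    order = sorted(range(len(spectrum_power)),
                   key=lambda k: spectrum_power[k], reverse=True)
    chosen, acc = [], 0.0
    for k in order:
        chosen.append(k)
        acc += spectrum_power[k]
        if acc >= energy_fraction * total:
            break
    return sorted(chosen)

# Two of five bins already carry 90% of the power here, so the
# downstream separation math would only run on those two:
print(high_energy_bins([0.5, 8.0, 1.0, 0.3, 0.2]))  # [1, 2]
```

Restricting later per-bin processing to this subset is what trades a small amount of spectral coverage for a large reduction in computation.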

G.723.1 MP-MLQ 고정 코드북 검색 시간 단축에 관한 연구 (The Research of Reducing the Fixed Codebook Search Time of G.723.1 MP-MLQ)

  • 김정진;장경아;목진덕;배명진;홍성훈;성유나
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 1999년도 추계종합학술대회 논문집
    • /
    • pp.1131-1134
    • /
    • 1999
  • In general, CELP-type vocoders provide good speech quality around 4.8 kbps. Among them, G.723.1, developed for Internet phone and videoconferencing, includes two vocoders: 5.3 kbps ACELP and 6.3 kbps MP-MLQ. Since 6.3 kbps MP-MLQ requires a large amount of computation for the fixed codebook search, real-time processing is difficult to realize. To address this problem, this paper proposes a new method that reduces the codebook search time by up to about 50%. We first decide the grid bit, then search the codebook. The grid bit is selected by comparing synthetic speech, synthesized with only the odd or only the even pulses of the target vector, against the DC-removed original speech. As a result, we reduced the total processing time of G.723.1 MP-MLQ by up to about 26.08%. In the objective quality test, a segSNR of 11.19 dB was obtained, and in the subjective quality test there was almost no speech degradation.
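
The segSNR figure quoted above is a standard objective measure: the average of per-frame SNRs in dB, each frame's value clamped to a fixed range. A minimal sketch, assuming 160-sample frames (20 ms at the 8 kHz rate used by G.723.1) and the usual [-10, 35] dB clamp:

```python
import math

def seg_snr(reference, degraded, frame_len=160, floor=-10.0, ceil=35.0):
    """Segmental SNR in dB: mean of per-frame SNRs, each clamped to [floor, ceil]."""
    snrs = []
    usable = min(len(reference), len(degraded))
    for start in range(0, usable - frame_len + 1, frame_len):
        ref = reference[start:start + frame_len]
        deg = degraded[start:start + frame_len]
        sig = sum(x * x for x in ref)
        err = sum((x - y) ** 2 for x, y in zip(ref, deg))
        if sig > 0 and err > 0:  # skip silent or identical frames
            snrs.append(min(max(10 * math.log10(sig / err), floor), ceil))
    return sum(snrs) / len(snrs) if snrs else 0.0

# A copy scaled to 90% amplitude leaves an error 0.1x the signal in
# amplitude, i.e. 20 dB down in power:
ref = [math.sin(0.1 * n) for n in range(480)]
print(seg_snr(ref, [0.9 * x for x in ref]))  # ≈ 20.0
```

Averaging in the dB domain, rather than computing one global SNR, is what makes the measure track quality in low-energy speech segments as well as loud ones.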

규칙 합성음의 객관적 품질평가에 관한 연구 (A Study on Objective Quality Assessment of Synthesized Speech by Rule)

  • 홍진우
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1991년도 학술발표회 논문집
    • /
    • pp.67-72
    • /
    • 1991
  • This paper evaluates the quality of synthesized speech by rule using the LPC CD as an objective measure and then compares the result with the subjective analysis. By evaluating the quality of synthesized speech by rule objectively, we have tried to resolve the problems (evaluation time or size expansion, variability within the analysis results) that arise when the evaluation is done subjectively. Also, by comparing intelligibility, the index for the subjective quality evaluation of synthesized speech by rule, with evaluation results obtained using MOS and the objective evaluation, we have demonstrated the validity of the objective analysis and thus provide a guide that should be useful for the R&D and marketing of synthesis-by-rule methods.

규칙 합성음의 이해성 평가를 위한 단어표 구성 및 실험법 (A Word List Construction and Measurement Method for Intelligibility Assessment of Synthesized Speech by Rule)

  • 김성한;홍진우;김순협
    • 전자공학회논문지B
    • /
    • Vol. 29B, No. 1
    • /
    • pp.43-49
    • /
    • 1992
  • As a result of recent progress in speech synthesis techniques, new services based on these techniques are being introduced into the telephone communication system. In setting standards, voice quality is obviously an important criterion. It is therefore important to develop a quality evaluation method for synthesized speech, both for the diagnostic assessment of system algorithms and for the fair comparison of assessment values. This paper describes several basic concepts and criteria for the quality (intelligibility) assessment of synthesized speech by rule, then proposes a word selection method and the word list to be used in the word intelligibility test. Finally, a test method for word intelligibility is described.

PESQ-Based Selection of Efficient Partial Encryption Set for Compressed Speech

  • Yang, Hae-Yong;Lee, Kyung-Hoon;Lee, Sang-Han;Ko, Sung-Jea
    • ETRI Journal
    • /
    • Vol. 31, No. 4
    • /
    • pp.408-418
    • /
    • 2009
  • Adopting an encryption function in voice over Wi-Fi service incurs problems such as additional power consumption and degradation of communication quality. To overcome these problems, a partial encryption (PE) algorithm for compressed speech was recently introduced. However, from the security point of view, the partial encryption sets (PESs) of the conventional PE algorithm still have much room for improvement. This paper proposes a new selection method for finding a smaller PES while maintaining the security level of encrypted speech. The proposed PES selection method employs the perceptual evaluation of the speech quality (PESQ) algorithm to objectively measure the distortion of speech. The proposed method is applied to the ITU-T G.729 speech codec, and content protection capability is verified by a range of tests and a reconstruction attack. The experimental results show that encrypting only 20% of the compressed bitstream is sufficient to effectively hide the entire content of speech.
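
The selection problem above can be pictured as a greedy search: grow the encrypted-field set until the measured distortion (in the paper, a PESQ-based score on G.729 output) passes a threshold. The field names and the additive distortion function below are illustrative stand-ins, not the actual G.729 bitstream layout or the paper's exact procedure:

```python
def select_partial_encryption_set(fields, distortion_of, target=1.5):
    """Greedily grow the set of encrypted bitstream fields until the
    resulting speech distortion reaches `target`.

    fields:        candidate bitstream fields (names are hypothetical)
    distortion_of: callable mapping a set of encrypted fields to a
                   distortion score (e.g. a drop in PESQ MOS)
    """
    chosen = set()
    while distortion_of(chosen) < target:
        # Pick the remaining field whose encryption adds the most distortion.
        best = max((f for f in fields if f not in chosen),
                   key=lambda f: distortion_of(chosen | {f}),
                   default=None)
        if best is None:
            break  # nothing left to encrypt
        chosen.add(best)
    return chosen

# Toy distortion model: each field contributes a fixed amount.
weights = {"LSP": 1.0, "pitch": 0.8, "gain": 0.4, "pulse_pos": 0.2}
pes = select_partial_encryption_set(
    weights, lambda s: sum(weights[f] for f in s), target=1.5)
print(sorted(pes))  # ['LSP', 'pitch']
```

Using an objective quality score as the stopping criterion is what lets the method find a smaller encryption set than a fixed, hand-chosen one while keeping the content unintelligible.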

Optimum MVF Estimation-Based Two-Band Excitation for HMM-Based Speech Synthesis

  • Han, Seung-Ho;Jeong, Sang-Bae;Hahn, Min-Soo
    • ETRI Journal
    • /
    • Vol. 31, No. 4
    • /
    • pp.457-459
    • /
    • 2009
  • The optimum maximum voiced frequency (MVF) estimation-based two-band excitation for hidden Markov model-based speech synthesis is presented. An analysis-by-synthesis scheme is adopted for the MVF estimation which leads to the minimum spectral distortion of synthesized speech. Experimental results show that the proposed method significantly improves synthetic speech quality.
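
The analysis-by-synthesis idea is simply to try each candidate maximum voiced frequency, re-synthesize, and keep the value that minimizes spectral distortion. A toy sketch with stand-in synthesis and distortion callables (a real system would compare the two-band-excited synthetic spectrum against the original):

```python
def estimate_mvf(candidates_hz, synthesize, spectral_distortion, original):
    """Analysis-by-synthesis MVF search: return the candidate maximum
    voiced frequency whose synthesis is closest to the original."""
    return min(candidates_hz,
               key=lambda mvf: spectral_distortion(synthesize(mvf), original))

# Stand-in model: pretend distortion is minimized near 3.4 kHz.
candidates = range(500, 4001, 100)
mvf = estimate_mvf(candidates,
                   synthesize=lambda f: f,                       # placeholder "synthesis"
                   spectral_distortion=lambda s, o: (s - o) ** 2,
                   original=3400)
print(mvf)  # 3400
```

The search is exhaustive over a coarse candidate grid, so its cost is one synthesis pass per candidate; the gain is that the voiced/unvoiced band split is chosen by the same distortion criterion the synthesizer is judged on.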

공동 이용을 위한 음성 인식 및 합성용 음성코퍼스의 발성 목록 설계 (Design of Linguistic Contents of Speech Corpora for Speech Recognition and Synthesis for Common Use)

  • 김연화;김형주;김봉완;이용주
    • 대한음성학회지:말소리
    • /
    • No. 43
    • /
    • pp.89-99
    • /
    • 2002
  • Recently, as the field of speech information technology progresses rapidly, research into ways of improving large-vocabulary continuous speech recognition and speech synthesis is being carried out intensively. In the field of speech recognition, the development of stochastic methods such as HMMs requires a large amount of speech data for training; likewise, in the field of speech synthesis, recent practice shows that synthesis of better quality can be produced by selecting and connecting variable-size speech segments from a large amount of speech data. In this paper we design and discuss the linguistic contents of speech corpora for speech recognition and synthesis to be shared in common.

MPEG-4 TTS (Text-to-Speech)

  • 한민수
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 1999년도 하계종합학술대회 논문집
    • /
    • pp.699-707
    • /
    • 1999
  • It cannot be denied that speech is the most natural interfacing tool between humans and machines. In order to realize acceptable speech interfaces, highly advanced speech recognizers and synthesizers are indispensable. Text-to-Speech (TTS) technology has been attracting a lot of interest among speech engineers because of its benefits, namely its possible application areas: talking computers, emergency alarm systems with speech output, speech output devices for the speech-impaired, and so on. Many researchers have made significant progress in speech synthesis techniques for their own languages, and as a result the quality of currently available speech synthesizers is believed to be acceptable to normal users. This is partly why the MPEG group decided to include TTS technology as one of its MPEG-4 functionalities. ETRI has made major contributions to the current MPEG-4 TTS among the various MPEG-4 functionalities. They are: 1) use of the original prosody for synthesized speech output, 2) trick-mode functions for general users without breaking the prosody of synthesized speech, 3) interoperability with Facial Animation (FA) tools, and 4) dubbing a moving/animated picture with lip-shape pattern information.

모바일 VoIP 음성통신을 위한 대화음질 측정 시스템 (Conversational Quality Measurement System for Mobile VoIP Speech Communication)

  • 조재만;김형국
    • 한국ITS학회 논문지
    • /
    • Vol. 10, No. 4
    • /
    • pp.71-77
    • /
    • 2011
  • In this paper, we implement a conversational speech quality measurement system that provides objective QoS assessment for high-quality mobile VoIP voice communication. For the measurement, a VoIP voice call system was implemented on two smartphones connected over VoIP, consisting of echo and noise cancellation, speech encoding and decoding, RTP (Real-Time Protocol) packet generation, jitter buffer control, and POS (Play-out Schedule) with LC (Loss Concealment). The measurement system is connected to the microphones and speakers of the two VoIP-connected smartphones; it records the speech signal of each speaker and then uses the recorded signals to measure CE (Conversational Efficiency), CS (Conversational Symmetry), and PESQ (Perceptual Evaluation of Speech Quality), as well as the correlation among CE, CS, and PESQ. The system was verified by measuring CE, CS, and PESQ under various SNRs and under the delay and loss variations caused by IP network fluctuations.

A Study on the Impact of Speech Data Quality on Speech Recognition Models

  • Yeong-Jin Kim;Hyun-Jong Cha;Ah Reum Kang
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 29, No. 1
    • /
    • pp.41-49
    • /
    • 2024
  • Speech recognition technology is steadily advancing and is now widely used in various fields. To examine the impact of speech data quality on speech recognition models, this study split the data into the full dataset and a subset consisting of the top 70% by SNR, obtained the transcription results of Seamless M4T and Google Cloud Speech-to-Text on each, and evaluated them using the Levenshtein Distance. In the experiments, Seamless M4T scored 13.6 on the high-SNR (signal-to-noise ratio) subset, lower than its score of 16.6 on the full dataset. Google Cloud Speech-to-Text, however, scored 8.3 on the full dataset, lower than its score on the high-SNR subset. This suggests that using high-SNR data has an effect when training a new speech recognition model, and that the Levenshtein Distance algorithm can serve as one of the metrics for evaluating speech recognition models.
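
The Levenshtein Distance used for scoring in the study above is the classic edit-distance dynamic program: the minimum number of insertions, deletions, and substitutions turning one string into the other. A minimal sketch (the string pair is an invented example, not data from the study):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between a and b (insert/delete/substitute, cost 1 each)."""
    prev = list(range(len(b) + 1))  # distances from a[:0] to every prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]

# Comparing an ASR hypothesis against its reference transcript:
print(levenshtein("speech quality", "speech qualty"))  # 1 (one missing letter)
```

A lower distance means the hypothesis is closer to the reference, which is why the lower scores above indicate better recognition; normalizing by reference length would give the more familiar character error rate.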