• Title/Abstract/Keyword: Speech Processing


음성/영상 연동성능 향상을 위한 입술움직임 영상 추적 테스트 환경 구축 (A Lip Movement Image Tracing Test Environment Build-up for the Speech/Image Interworking Performance Enhancement)

  • 이수종; 박준; 김응규
    • Proceedings of the Korea Information Processing Society (KIPS) 2007 Spring Conference, pp.328-329, 2007
  • This paper describes the construction of a test environment for tracking lip-movement images more accurately, as part of an approach that lets speech recognition run only when the person facing the system is actually moving his or her lips to speak, in situations, such as a robot, where the system is exposed to external acoustic noise. Whether the lip-movement tracking result can be used in the speech-interval detection stage depends on how accurately the lip movement can be tracked. To examine this, we implemented dynamic control of the video frame rate, color/binary image conversion, instantaneous capture, and recording and playback functions, so that the lip-movement tracking capability can be checked from multiple angles. When the speech and image functions were interworked, an interworking success rate of about 99.3% was obtained.

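The test-environment functions listed in the abstract above (dynamic frame-rate control, color/binary image conversion, instantaneous capture, recording/playback) map naturally onto a small camera loop. The following is only a minimal sketch of the conversion and capture pieces using OpenCV, not the authors' implementation; the camera index, threshold value, and key bindings are arbitrary assumptions.

```python
# Minimal sketch of a lip-tracking test loop (not the paper's code):
# grabs webcam frames, shows a color view and a binarized view, and
# saves a snapshot on demand so tracking results can be inspected.
import cv2

def run_test_environment(camera_index=0, threshold=128):
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError("camera not available")
    snapshot_id = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # color -> binary conversion, one of the functions the paper lists
        _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        cv2.imshow("color", frame)
        cv2.imshow("binary", binary)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("c"):          # instantaneous capture
            cv2.imwrite(f"capture_{snapshot_id:03d}.png", frame)
            snapshot_id += 1
        elif key == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_test_environment()
```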

Subspace distribution clustering hidden Markov model을 위한 codebook design (Codebook design for subspace distribution clustering hidden Markov model)

  • 조영규; 육동석
    • 대한음성학회 2005년도 춘계 학술대회 발표논문집 (Proceedings of the 2005 Spring Conference), pp.87-90, 2005
  • Today's state-of-the-art speech recognition systems typically use continuous-density hidden Markov models with mixtures of Gaussian distributions. To obtain higher recognition accuracy, these hidden Markov models typically require a huge number of Gaussian distributions, so such systems require too much memory and are too slow for large applications. Many approaches have been proposed for designing compact acoustic models. One of them is the subspace distribution clustering hidden Markov model, which represents the original full-space distributions as combinations of a small number of subspace distribution codebooks. How the codebooks are built is therefore an important issue in this approach. In this paper, we report experimental results on various quantization methods for building more accurate models.

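As a rough illustration of the codebook-design problem the abstract raises, the sketch below splits each Gaussian's mean/log-variance vector into fixed-width subspaces and quantizes the subspace Gaussians with plain k-means. This is a simplification for illustration only, not the paper's method; the subspace count, codebook size, and feature dimensions are assumed values.

```python
# Sketch: build subspace codebooks for SDCHMM-style tying (illustrative only).
# Each full-space Gaussian (mean, var) is split into S subspaces; the subspace
# Gaussians are clustered, and each original Gaussian is then represented by
# the indices of its codewords.
import numpy as np
from sklearn.cluster import KMeans

def build_subspace_codebooks(means, variances, n_subspaces=13, codebook_size=256):
    n_gauss, dim = means.shape
    assert dim % n_subspaces == 0
    width = dim // n_subspaces
    codebooks, indices = [], []
    for s in range(n_subspaces):
        sl = slice(s * width, (s + 1) * width)
        # represent each subspace Gaussian by its mean and log-variance
        sub = np.hstack([means[:, sl], np.log(variances[:, sl])])
        km = KMeans(n_clusters=min(codebook_size, n_gauss), n_init=4,
                    random_state=0).fit(sub)
        codebooks.append(km.cluster_centers_)
        indices.append(km.labels_)          # codeword index per original Gaussian
    return codebooks, np.stack(indices, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = rng.normal(size=(1000, 39))          # e.g. 39-dim MFCC-style features
    variances = rng.uniform(0.5, 2.0, size=(1000, 39))
    cbs, idx = build_subspace_codebooks(means, variances)
    print(idx.shape)                              # (1000, 13): 13 codeword indices per Gaussian
```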

연속음성에서 천이구간의 탐색, 추출, 근사합성에 관한 연구 (A Study on a Searching, Extraction and Approximation-Synthesis of Transition Segment in Continuous Speech)

  • 이시우
    • Journal of the Korea Information Processing Society (한국정보처리학회논문지), Vol. 7, No. 4, pp.1299-1304, 2000
  • In a speech coding system that uses separate voiced and unvoiced excitation sources, speech quality is degraded when voiced and unvoiced consonant segments coexist within a single frame. To avoid such coexistence within a frame, I propose a method for searching, extracting, and approximation-synthesizing the TSIUVC (Transition Segment Including UnVoiced Consonant). The method is based on the zero-crossing rate and a pitch detector using a FIR-STREAK digital filter. As a result, the TSIUVC extraction rates were 84.8% (plosives), 94.9% (fricatives), and 92.3% (affricates) for female voices, and 88% (plosives), 94.9% (fricatives), and 92.3% (affricates) for male voices, respectively. High-quality approximation-synthesis waveforms within the TSIUVC were also obtained using frequency information below 0.547 kHz and above 2.813 kHz. The method can be applied to low-bit-rate speech coding, speech analysis, and speech synthesis.

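The extraction method rests on two frame-level cues: the zero-crossing rate and a pitch (voicing) decision. The sketch below illustrates those two cues in a generic way; it substitutes a plain autocorrelation voicing check for the paper's FIR-STREAK-filter pitch detector, and the frame length, hop, and thresholds are assumptions.

```python
# Sketch: frame-level zero-crossing rate plus a simple autocorrelation voicing
# check, the two cues combined to flag frames that look like unvoiced
# consonants inside a transition segment.
import numpy as np

def zero_crossing_rate(frame):
    return np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))

def is_voiced(frame, sr, fmin=60.0, fmax=400.0, threshold=0.3):
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return False
    lo, hi = int(sr / fmax), int(sr / fmin)
    peak = np.max(ac[lo:hi]) / ac[0]          # normalized autocorrelation peak
    return peak > threshold

def flag_unvoiced_consonant_frames(signal, sr, frame_len=400, hop=200, zcr_thresh=0.15):
    flags = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        flags.append(zero_crossing_rate(frame) > zcr_thresh and not is_voiced(frame, sr))
    return np.array(flags)   # True where a frame looks like an unvoiced consonant
```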

Landmark-Guided Segmental Speech Decoding for Continuous Mandarin Speech Recognition

  • Chao, Hao; Song, Cheng
    • Journal of Information Processing Systems, Vol. 12, No. 3, pp.410-421, 2016
  • In this paper, we propose a framework that incorporates landmarks into a segment-based Mandarin speech recognition system. Landmarks provide boundary information and phonetic-class information, which are used to direct the decoding process. To validate the method, two kinds of landmarks that can be reliably detected are used to direct the decoding of a segment model (SM) based Mandarin LVCSR (large vocabulary continuous speech recognition) system. Experimental results show that about 30% of the decoding time can be saved without an obvious decrease in recognition accuracy, demonstrating the potential of the method.
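The saving comes from letting detected landmarks constrain where segment boundaries may be hypothesized. The sketch below shows that idea in isolation: a dynamic program over segments whose endpoints are restricted to landmark positions, with a placeholder segment_score standing in for the segment model's acoustic score. It is not the authors' SM decoder.

```python
# Sketch: dynamic programming over segments whose boundaries are restricted to
# detected landmark positions. `segment_score(a, b)` stands in for the segment
# model's acoustic score of the span [a, b); restricting boundaries to
# landmarks is what shrinks the search space.
def decode_with_landmarks(landmarks, segment_score, max_segment_frames=60):
    # landmarks: sorted frame indices that include 0 and the last frame
    best = {landmarks[0]: (0.0, None)}          # boundary -> (score, back-pointer)
    for j, end in enumerate(landmarks[1:], start=1):
        candidates = []
        for start in landmarks[:j]:
            if end - start > max_segment_frames or start not in best:
                continue
            candidates.append((best[start][0] + segment_score(start, end), start))
        if candidates:
            best[end] = max(candidates)
    # back-trace the best boundary sequence
    path, node = [], landmarks[-1]
    while node is not None:
        path.append(node)
        node = best[node][1]
    return list(reversed(path))

if __name__ == "__main__":
    # toy score: prefer segments of about 20 frames
    score = lambda a, b: -abs((b - a) - 20)
    print(decode_with_landmarks([0, 18, 25, 41, 60, 80], score))
```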

PDA상에서 음성인식을 이용한 차량번호 조회시스템 (A car number retrieving system using speech recognition for PDA)

  • 김우성; 김동환; 윤재선; 홍광석
    • Proceedings of the KISPS (Korea Institute of Signal Processing and Systems) Summer Conference 2001, pp.281-284, 2001
  • In this paper, we implemented a system on a PDA that retrieves vehicle license-plate numbers through speech recognition and synthesis. The system consists of a recognizer for four connected digits of a plate number and for command words, and plays back synthesized speech at each step. The recognition system was tested speaker-independently; across multiple speakers, the recognition rates for the four-connected-digit plate numbers and for the command words were 97% and 99%, respectively.


Fixed Point Implementation of the QCELP Speech Coder

  • Yoon, Byung-Sik; Kim, Jae-Won; Lee, Won-Myoung; Jang, Seok-Jin; Choi, Song-In; Lim, Myoung-Seon
    • ETRI Journal, Vol. 19, No. 3, pp.242-258, 1997
  • The Qualcomm code excited linear prediction (QCELP) speech coder was adopted to increase the capacity of the CDMA Mobile System (CMS). In this paper, we implemented the QCELP speech coding algorithm on a TMS320C50 fixed-point DSP chip, and a fixed-point simulation was also carried out in C. The QCELP implementation on the TMS320C50 occupied 10k words, and its data memory was 4k words. In a normal call test on the CMS, in which a mobile-to-mobile call was made in bypass mode without double vocoding, the mean opinion score for speech quality was 3.11.

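A fixed-point port of a speech coder of this kind is built on saturating Q-format arithmetic. The helpers below are a generic Q15 sketch of the sort such a port and its C simulation rely on; they are illustrative only and are not taken from the ETRI implementation.

```python
# Sketch: Q15 fixed-point helpers (saturating add, rounded multiply) of the
# kind a fixed-point speech-coder simulation depends on.
Q15_MAX, Q15_MIN = 32767, -32768

def saturate(x):
    return max(Q15_MIN, min(Q15_MAX, x))

def add_q15(a, b):
    return saturate(a + b)

def mult_q15(a, b):
    # (a * b) lands in Q30; round and shift back to Q15 with saturation
    return saturate((a * b + (1 << 14)) >> 15)

def float_to_q15(x):
    return saturate(int(round(x * 32768.0)))

def q15_to_float(x):
    return x / 32768.0

if __name__ == "__main__":
    a, b = float_to_q15(0.5), float_to_q15(-0.25)
    print(q15_to_float(mult_q15(a, b)))   # ~ -0.125
```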

GMM을 이용한 프레임 단위 분류에 의한 우리말 음성의 분할과 인식 (Korean Speech Segmentation and Recognition by Frame Classification via GMM)

  • 권호민; 한학용; 고시영; 허강인
    • Proceedings of the KISPS (Korea Institute of Signal Processing and Systems) Summer Conference 2003, pp.18-21, 2003
  • In general, it has been considered a difficult problem to divide continuous speech into short intervals of identical phonemic quality. In this paper, we use a Gaussian Mixture Model (GMM), a probability-density model, to divide speech into phoneme units of initial, medial, and final sounds, and then perform continuous speech recognition on them. Phoneme decision boundaries are determined by an algorithm that selects the most frequent class within a short interval. Recognition is performed with a Continuous Hidden Markov Model (CHMM), and the result is compared with phoneme boundaries obtained by manual (visual) segmentation. The experimental results confirm that the presented method is relatively superior for automatic segmentation of Korean speech.

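The segmentation procedure boils down to frame-wise classification with per-class GMMs followed by a majority vote over a short interval. The sketch below shows that pipeline with scikit-learn; the number of mixture components and the window size are assumptions, and the class labels are placeholders for the initial/medial/final classes used in the paper.

```python
# Sketch: frame-wise phoneme-class decisions with per-class GMMs, smoothed by
# a majority vote over a short window; segment boundaries fall where the
# smoothed class label changes.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_class_gmms(features_by_class, n_components=4):
    # features_by_class: {class_label: array of shape (n_frames, n_features)}
    return {c: GaussianMixture(n_components, covariance_type="diag",
                               random_state=0).fit(x)
            for c, x in features_by_class.items()}

def classify_frames(frames, gmms, window=5):
    classes = list(gmms)
    scores = np.stack([gmms[c].score_samples(frames) for c in classes], axis=1)
    raw = np.argmax(scores, axis=1)
    smoothed = raw.copy()
    for t in range(len(raw)):                   # majority vote in a short interval
        lo, hi = max(0, t - window), min(len(raw), t + window + 1)
        smoothed[t] = np.bincount(raw[lo:hi]).argmax()
    boundaries = np.flatnonzero(np.diff(smoothed)) + 1
    return [classes[i] for i in smoothed], boundaries
```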

한국어 기반 음성 인식에서 사투리 표현에 관한 연구 (A Study on Dialect Expression in Korean-Based Speech Recognition)

  • 이신협
    • Proceedings of the Korea Institute of Information and Communication Engineering (KIICE) 2022 Spring Conference, pp.333-335, 2022
  • Advances in speech recognition technology, together with STT and TTS, are being applied in various video and streaming services. In real conversational speech, however, dialect usage, stopwords, interjections, and repeated near-synonyms make it difficult to obtain clear, written-style transcriptions. In this study, we propose a speech recognition approach for dialects that are ambiguous to recognizers, combining category-based dictionary preprocessing of key dialect words with dialect prosody applied as an attribute of the speech recognition network model.

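One half of the proposal, the category-based dictionary preprocessing of key dialect words, can be pictured as a simple lookup applied to the transcript before intent analysis. The sketch below is a hypothetical illustration; the dictionary entries are invented examples, not the paper's lexicon, and the prosody-based network attributes are not modeled here.

```python
# Sketch: category-based dialect-to-standard word substitution applied to an
# STT transcript before intent analysis. The entries are made-up examples.
DIALECT_DICT = {
    "greeting": {"밥 뭇나": "밥 먹었니"},
    "question": {"머라카노": "뭐라고 하니"},
}

def normalize_dialect(transcript, dialect_dict=DIALECT_DICT):
    for category, entries in dialect_dict.items():
        for dialect_form, standard_form in entries.items():
            transcript = transcript.replace(dialect_form, standard_form)
    return transcript

if __name__ == "__main__":
    print(normalize_dialect("머라카노, 밥 뭇나?"))   # -> "뭐라고 하니, 밥 먹었니?"
```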

A Study on DNN-based STT Error Correction

  • Jong-Eon Lee
    • International Journal of Advanced Smart Convergence, Vol. 12, No. 4, pp.171-176, 2023
  • This study concerns a speech recognition error correction system designed to detect and correct speech recognition errors before natural language processing, in order to increase the success rate of intent analysis with optimal efficiency across various service domains. An encoder is constructed to embed each correct utterance token and one or more erroneous utterance tokens corresponding to it, so that they are all located in a dense vector space with vector values similar to those of the correct token. For each embedded correct utterance token, an error detector finds utterance tokens within a preset Manhattan distance of that correct token in the dense vector space, and errors are corrected by replacing each detected erroneous utterance token with the correct utterance token closest to it in Manhattan distance.
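The detection/correction rule described above reduces to a nearest-neighbor lookup under the Manhattan (L1) distance in the embedding space. The sketch below illustrates just that rule; the encoder is assumed to exist and is replaced by random vectors, and the tokens and distance threshold are invented for the example.

```python
# Sketch: nearest-correct-token lookup by Manhattan (L1) distance in an
# embedding space, mirroring the detection/correction rule in the abstract.
import numpy as np

def correct_token(query_vec, correct_vecs, correct_tokens, max_distance):
    # L1 distance from the query embedding to every correct-token embedding
    dists = np.sum(np.abs(correct_vecs - query_vec), axis=1)
    nearest = int(np.argmin(dists))
    if dists[nearest] <= max_distance:        # close enough: treat as an error of that token
        return correct_tokens[nearest], float(dists[nearest])
    return None, float(dists[nearest])        # outside the radius: leave the token as-is

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = ["에어컨 켜 줘", "불 꺼 줘", "음악 틀어 줘"]   # hypothetical correct utterance tokens
    correct_vecs = rng.normal(size=(3, 16))
    noisy = correct_vecs[1] + rng.normal(scale=0.05, size=16)  # simulated recognition error
    print(correct_token(noisy, correct_vecs, vocab, max_distance=2.0))
```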

정상 음성의 목소리 특성의 정성적 분류와 음성 특징과의 상관관계 도출 (Qualitative Classification of Voice Quality of Normal Speech and Derivation of its Correlation with Speech Features)

  • 김정민; 권철홍
    • 말소리와 음성과학 (Phonetics and Speech Sciences), Vol. 6, No. 1, pp.71-76, 2014
  • In this paper, the voice quality of normal speech is qualitatively classified into five components: breathy, creaky, rough, nasal, and thin/thick voice. To determine whether a correlation exists between subjective and objective measures of voice, each voice is perceptually evaluated on a 1/2/3 scale by speech processing specialists and acoustically analyzed using speech analysis tools such as Praat, MDVP, and VoiceSauce. The speech parameters include features related to the speech source and the vocal tract filter. The statistical analysis uses a two-independent-samples non-parametric test. Experimental results show a significant correlation between the speech feature parameters and the components of voice quality.
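The two-independent-samples non-parametric test referred to is commonly the Mann-Whitney U test; the sketch below shows such a test applied to one acoustic feature split by a perceptual rating, using synthetic data rather than the paper's measurements.

```python
# Sketch: a two-independent-samples non-parametric test (Mann-Whitney U)
# applied to an acoustic feature split by a perceptual voice-quality rating.
# The data below are synthetic placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# e.g. an H1-H2-like source feature for voices rated "breathy" vs. "not breathy"
breathy = rng.normal(loc=6.0, scale=2.0, size=30)
non_breathy = rng.normal(loc=3.0, scale=2.0, size=30)

stat, p_value = mannwhitneyu(breathy, non_breathy, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4g}")   # a small p suggests the feature differs between groups
```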