• Title/Abstract/Keyword: Speech processing

Robust Non-negative Matrix Factorization with β-Divergence for Speech Separation

  • Li, Yinan;Zhang, Xiongwei;Sun, Meng
    • ETRI Journal
    • /
    • Vol. 39, No. 1
    • /
    • pp.21-29
    • /
    • 2017
  • This paper addresses the problem of unsupervised speech separation based on robust non-negative matrix factorization (RNMF) with β-divergence, when neither speech nor noise training data is available beforehand. We propose a robust version of non-negative matrix factorization, inspired by the recently developed sparse and low-rank decomposition, in which the data matrix is decomposed into the sum of a low-rank matrix and a sparse matrix. Efficient multiplicative update rules to minimize the β-divergence-based cost function are derived. A convolutional extension of the algorithm is also presented, which takes into account the time dependency of the non-negative noise bases. Experimental speech separation results show that the proposed convolutional RNMF successfully separates the repeating time-varying spectral structures from the magnitude spectrum of the mixture, and does so without any prior training.

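The multiplicative-update machinery behind this kind of factorization can be illustrated with plain β-divergence NMF. The sketch below is a minimal numpy version of the standard updates for V ≈ WH, not the authors' robust low-rank-plus-sparse variant; the rank, iteration count, and toy spectrogram are illustrative assumptions.

    import numpy as np

    def beta_nmf(V, rank=8, beta=1.0, n_iter=200, eps=1e-9, seed=0):
        # Plain beta-divergence NMF via multiplicative updates (V ~= W @ H).
        # beta = 2: Euclidean, beta = 1: Kullback-Leibler, beta = 0: Itakura-Saito.
        rng = np.random.default_rng(seed)
        n_freq, n_frames = V.shape
        W = rng.random((n_freq, rank)) + eps
        H = rng.random((rank, n_frames)) + eps
        for _ in range(n_iter):
            WH = W @ H + eps
            H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
            WH = W @ H + eps
            W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
        return W, H

    # Toy usage: factorize a random non-negative "magnitude spectrogram".
    V = np.abs(np.random.randn(257, 100)) + 1e-6
    W, H = beta_nmf(V, rank=8, beta=1.0)
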
Maximum mutual information estimation을 이용한 linear spectral transformation 기반의 adaptation (Linear Spectral Transformation-Based Adaptation Using Maximum Mutual Information Estimation)

  • 유봉수;김동현;육동석
    • 대한음성학회: Conference Proceedings
    • /
    • Proceedings of the 2005 Spring Conference of 대한음성학회
    • /
    • pp.53-56
    • /
    • 2005
  • In this paper, we propose a transformation-based robust adaptation technique that uses maximum mutual information (MMI) estimation for the objective function and linear spectral transformation (LST) for adaptation. LST is an adaptation method that deals with environmental noise in the linear spectral domain, so that a small number of parameters can be used for fast adaptation. The proposed technique, called MMI-LST, is evaluated on the TIMIT and FFMTIMIT corpora to show that it is advantageous when only a small amount of adaptation speech is available.

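As a rough illustration of adaptation in the linear spectral domain, the hypothetical sketch below converts log-mel features back to the linear domain, applies a per-band scale and bias (a small number of parameters), and returns to the log domain. Estimating those parameters with the MMI objective, which is the paper's actual contribution, is not reproduced here.

    import numpy as np

    def apply_lst(log_mel_feats, scale, bias):
        # Hypothetical linear-spectral-domain adaptation: exponentiate, apply a
        # per-band affine transform, and go back to the log domain.
        linear = np.exp(log_mel_feats)                     # (frames, bands)
        adapted = np.maximum(scale * linear + bias, 1e-10)
        return np.log(adapted)

    # Toy usage: 40 mel bands, so only 80 adaptation parameters in total.
    feats = np.random.randn(100, 40)
    adapted = apply_lst(feats, scale=np.ones(40), bias=np.full(40, 0.1))
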
가변어휘 핵심어 검출을 위한 비핵심어 모델링 및 후처리 성능평가 (Performance Evaluation of Nonkeyword Modeling and Postprocessing for Vocabulary-independent Keyword Spotting)

  • 김형순;김영국;신영욱
    • Speech Sciences
    • /
    • Vol. 10, No. 3
    • /
    • pp.225-239
    • /
    • 2003
  • In this paper, we develop a keyword spotting system using a vocabulary-independent speech recognition technique and investigate several non-keyword modeling and post-processing methods to improve its performance. To model non-keyword speech segments, monophone clustering and Gaussian mixture models (GMMs) are considered. We employ likelihood ratio scoring for the post-processing schemes to verify the recognition results, and filler models, anti-subword models, and N-best decoding results are considered as alternative hypotheses for likelihood ratio scoring. We also examine different methods of constructing anti-subword models. We evaluate the performance of our system on an automatic telephone exchange service task. The results show that GMM-based non-keyword modeling yields better performance than monophone clustering. According to the post-processing experiments, the anti-keyword model based on the Kullback-Leibler distance and the N-best decoding method perform better than the other methods, reducing keyword recognition errors by more than 50% at a keyword rejection rate of 5%.

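The likelihood ratio post-processing can be reduced to a small decision rule: compare the keyword model's score against an alternative score (from filler models, anti-subword models, or N-best results), normalize by duration, and threshold. The sketch below is a hypothetical illustration of that rule; the scores themselves would come from the recognizer, and the threshold is an assumption.

    def verify_keyword(keyword_loglik, alt_loglik, n_frames, threshold=0.5):
        # Duration-normalized log-likelihood ratio between the keyword hypothesis
        # and the alternative hypothesis; accept only if it clears the threshold.
        llr = (keyword_loglik - alt_loglik) / max(n_frames, 1)
        return llr >= threshold

    # Toy usage: scores for a putative 120-frame keyword segment.
    accepted = verify_keyword(keyword_loglik=-4200.0, alt_loglik=-4350.0, n_frames=120)
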
청각 장애자를 위한 시각 음성 처리 시스템에 관한 연구 (A Study on the Visible Speech Processing System for the Hearing Impaired)

  • 김원기;김남현;유선국;정성현
    • Korean Society of Medical and Biological Engineering: Conference Proceedings
    • /
    • 1990 Spring Conference of the Korean Society of Medical and Biological Engineering
    • /
    • pp.57-61
    • /
    • 1990
  • The purpose of this study is to help the speech training of the hearing impaired with a visible speech processing system. In brief, this system converts the features of speech signals into graphics on a monitor and adjusts the features of the hearing impaired toward normal ones. The features used in this system are formant and pitch. They are extracted using digital signal processing techniques such as the linear predictive method and the AMDF (Average Magnitude Difference Function). Easily visible features are being studied in order to effectively train the abnormal speech of the hearing impaired.

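The formant feature mentioned above is commonly obtained from linear prediction. The sketch below is a bare-bones autocorrelation-LPC version in numpy that reads candidate formants from pole angles; the window, model order, and synthetic test frame are illustrative assumptions and it is not the paper's exact procedure.

    import numpy as np

    def lpc_formants(frame, fs, order=12):
        # Autocorrelation LPC: solve the normal equations, then read candidate
        # formant frequencies from the angles of the complex pole pairs.
        frame = frame * np.hamming(len(frame))
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R + 1e-6 * np.eye(order), r[1:order + 1])
        poles = np.roots(np.concatenate(([1.0], -a)))
        poles = poles[np.imag(poles) > 0]                   # one pole per conjugate pair
        return np.sort(np.angle(poles) * fs / (2 * np.pi))  # pole angle -> Hz

    # Toy usage: a synthetic vowel-like frame sampled at 10 kHz.
    fs = 10000
    t = np.arange(400) / fs
    frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)
    print(lpc_formants(frame, fs))
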
감정 인지를 위한 음성 및 텍스트 데이터 퓨전: 다중 모달 딥 러닝 접근법 (Speech and Textual Data Fusion for Emotion Detection: A Multimodal Deep Learning Approach)

  • 에드워드 카야디;송미화
    • Korea Information Processing Society: Conference Proceedings
    • /
    • Proceedings of the 2023 KIPS Fall Conference
    • /
    • pp.526-527
    • /
    • 2023
  • Speech emotion recognition (SER) is one of the interesting topics in the machine learning field. Developing a multimodal speech emotion recognition system can bring numerous benefits. This paper explains how BERT, as the text recognizer, and a CNN, as the speech recognizer, are fused to build a multimodal SER system.

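A minimal sketch of the fusion step, assuming a BERT-style 768-dimensional text embedding and a 128-dimensional pooled CNN speech embedding (both dimensions are assumptions, and random tensors stand in for the real encoders):

    import torch
    import torch.nn as nn

    class LateFusionSER(nn.Module):
        # Concatenate the text and speech embeddings and classify into emotions.
        def __init__(self, text_dim=768, speech_dim=128, n_emotions=4):
            super().__init__()
            self.classifier = nn.Sequential(
                nn.Linear(text_dim + speech_dim, 256),
                nn.ReLU(),
                nn.Dropout(0.3),
                nn.Linear(256, n_emotions),
            )

        def forward(self, text_emb, speech_emb):
            return self.classifier(torch.cat([text_emb, speech_emb], dim=-1))

    # Toy usage with random stand-ins for the BERT and CNN encoder outputs.
    model = LateFusionSER()
    logits = model(torch.randn(8, 768), torch.randn(8, 128))
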
유성음 구간 검출 알고리즘에 관한 연구 (A Novel Algorithm for Discrimination of Voiced Sounds)

  • 장규철;우수영;유창동
    • Speech Sciences
    • /
    • Vol. 9, No. 3
    • /
    • pp.35-45
    • /
    • 2002
  • A simple algorithm for discriminating voiced sounds in speech is proposed. In addition to low-frequency energy and zero-crossing rate (ZCR), both of which have been widely used in the past for identifying voiced sounds, the proposed algorithm incorporates pitch variation to improve the discrimination rate. Evaluation on the TIMIT corpus shows an improvement of 13% in the discrimination of voiced phonemes over the traditional algorithm using only energy and ZCR.

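A toy frame-level version of the combined decision could look like the sketch below; plain frame energy stands in for the low-frequency energy, and the thresholds and the crude autocorrelation pitch estimate are illustrative assumptions rather than the paper's settings.

    import numpy as np

    def is_voiced(frame, fs, prev_pitch=None, energy_thr=0.01, zcr_thr=0.25, dev_thr=0.2):
        # Combine frame energy, zero-crossing rate, and pitch stability relative
        # to the previous voiced frame (illustrative thresholds).
        energy = np.mean(frame ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(fs / 400), int(fs / 60)                 # 60-400 Hz search range
        pitch = fs / (lo + int(np.argmax(ac[lo:hi])))
        stable = prev_pitch is None or abs(pitch - prev_pitch) / prev_pitch < dev_thr
        return (energy > energy_thr) and (zcr < zcr_thr) and stable, pitch

    # Toy usage: a 30 ms frame of a 150 Hz tone at 16 kHz should come out voiced.
    fs = 16000
    t = np.arange(int(0.03 * fs)) / fs
    voiced, pitch = is_voiced(0.5 * np.sin(2 * np.pi * 150 * t), fs)
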
입술 움직임 영상 신호를 이용한 음성 구간 검출 (Speech Activity Detection using Lip Movement Image Signals)

  • 김응규
    • Journal of the Institute of Convergence Signal Processing
    • /
    • Vol. 11, No. 4
    • /
    • pp.289-297
    • /
    • 2010
  • In this paper, a method is presented for preventing external acoustic noise from being misrecognized as speech during the speech-interval detection stage of speech recognition: in addition to the dynamic acoustic energy that may enter the system, the speaker's lip movement image signals are also checked. First, consecutive images are acquired through a PC camera and the presence or absence of lip movement is identified. Next, the lip movement image data are stored in shared memory and shared with the speech recognition process. Then, in the speech-interval detection step, which is the preprocessing stage of speech recognition, the data stored in shared memory are checked to verify whether the acoustic energy was produced by the speaker's utterance. Finally, experiments with the speech recognizer and the image processor running together confirmed that, when the speaker talks facing the camera, the coupled processing proceeds normally through to the output of the recognition result, whereas when the speaker talks without facing the camera, the coupled system does not output a recognition result. In addition, the discriminative power of the lip movement tracking was improved by replacing the initial lip movement feature values and initial template image obtained offline with those extracted online. An image processing testbed was built to visually monitor the lip movement tracking process and to analyze the related parameters in real time. The coupled speech and image processing system achieved a coordination rate of about 99.3% under various lighting conditions.

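The core gating idea, that acoustic energy counts as speech only when the image-processing side has also flagged lip movement, reduces to a one-line rule. In the sketch below a boolean argument stands in for the flag read from shared memory, and the energy threshold is an assumption.

    import numpy as np

    def speech_activity(frame, lip_moving, energy_thr=0.01):
        # Accept a frame as speech only if it has acoustic energy AND the
        # camera-side process reported lip movement (shared-memory flag).
        return bool(np.mean(frame ** 2) > energy_thr) and lip_moving

    # Toy usage: loud audio without lip movement is rejected as external noise.
    frame = 0.5 * np.random.randn(480)
    print(speech_activity(frame, lip_moving=False))   # -> False
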
OAK DSP Core 기반 CSD17C00A에서의 G.726 ADPCM의 실시간 구현 (The Real-Time Implementation of G.726 ADPCM on OAK DSP Core based CSD17C00A)

  • 홍성훈;심민규;성유나;하정호
    • The Acoustical Society of Korea: Conference Proceedings
    • /
    • Proceedings of the 1999 Conference of the Acoustical Society of Korea, Vol. 18, No. 1
    • /
    • pp.52-55
    • /
    • 1999
  • The G.726 coder, which provides multiple bit rates (16, 24, 32, and 40 kbps), uses ADPCM (Adaptive Differential Pulse Code Modulation) coding. In this paper, the G.726 ADPCM algorithm is implemented for real-time applications on the CSD17C00A chip, a general-purpose DSP for speech signal processing developed by C&S Technology. Bidirectional evaluation of G.726 was carried out through a codec loopback test, following the test procedure provided by ITU-T. The implemented G.726 coder requires an average computational load of 11 MIPS, 2.8K words of program memory, and 550 words of data memory.

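The sketch below is a deliberately simplified differential coder meant only to illustrate the ADPCM principle (predict, quantize the difference into a 4-bit code, adapt the step size); it is not the G.726 quantizer or predictor, and the adaptation constants are made up.

    import numpy as np

    def toy_adpcm_encode(x, step=0.02):
        # Predict each sample from the reconstructed previous value, quantize the
        # difference into a 4-bit code, and grow/shrink the step size.
        codes, pred = [], 0.0
        for s in x:
            code = int(np.clip(round((s - pred) / step), -8, 7))
            codes.append(code)
            pred += code * step                               # decoder-side reconstruction
            step = min(max(step * (1.2 if abs(code) > 4 else 0.9), 1e-4), 0.5)
        return codes

    # Toy usage: encode 20 ms of a 400 Hz tone sampled at 8 kHz into 4-bit codes.
    t = np.arange(160) / 8000
    codes = toy_adpcm_encode(0.3 * np.sin(2 * np.pi * 400 * t))
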
자동차 환경에서 Oak DSP 코어 기반 음성 인식 시스템 실시간 구현 (A Real-Time Implementation of Speech Recognition System Using Oak DSP core in the Car Noise Environment)

  • 우경호;양태영;이충용;윤대희;차일환
    • Speech Sciences
    • /
    • Vol. 6
    • /
    • pp.219-233
    • /
    • 1999
  • This paper presents a real-time implementation of a speaker-independent speech recognition system based on discrete hidden Markov models (DHMMs). The system is developed for a car navigation application, targeting an on-chip VLSI speech recognition system built on the fixed-point Oak DSP core of DSP GROUP LTD. We analyze the recognition procedure in C in order to design fixed-point real-time algorithms. Based on this analysis, we improve the algorithms so that they operate in real time and the recognition result is available as soon as the speech ends, by processing all recognition routines within a frame. Car noise is colored noise concentrated heavily in the low-frequency band below 400 Hz. For noise-robust processing, high-pass filtering and liftering of the distance measure of the feature vectors are applied to the recognition system. Recognition experiments were performed on twelve isolated command words. The recognition rates of the baseline recognizer were 98.68% with the car stopped and 80.7% with the car running. Using the noise processing methods, the recognition rate in the running condition was improved to 89.04%.

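The two noise-handling steps can be sketched directly: a high-pass filter that removes the car-noise band below 400 Hz, and a lifter that re-weights cepstral coefficients inside the distance measure. The filter order and the raised-sine lifter below are illustrative assumptions, not the paper's exact settings.

    import numpy as np
    from scipy.signal import butter, lfilter

    def highpass(x, fs, cutoff=400.0):
        # Suppress the low-frequency band where car noise is concentrated.
        b, a = butter(4, cutoff / (fs / 2), btype="highpass")
        return lfilter(b, a, x)

    def liftered_distance(c1, c2, lifter=22):
        # Weighted cepstral distance: a raised-sine lifter changes how much each
        # coefficient contributes before the vectors are compared.
        n = np.arange(1, len(c1) + 1)
        w = 1.0 + (lifter / 2.0) * np.sin(np.pi * n / lifter)
        return float(np.sqrt(np.sum((w * (np.asarray(c1) - np.asarray(c2))) ** 2)))

    # Toy usage: filter one second of audio and compare two 12-dim cepstra.
    fs = 8000
    filtered = highpass(np.random.randn(fs), fs)
    d = liftered_distance(np.random.randn(12), np.random.randn(12))
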
A Study on Pitch Period Detection Algorithm Based on Rotation Transform of AMDF and Threshold

  • 서현수;김남호
    • Journal of the Institute of Convergence Signal Processing
    • /
    • Vol. 7, No. 4
    • /
    • pp.178-183
    • /
    • 2006
  • As a great deal of research on speech signal processing has been carried out thanks to the recent rapid development of information and communication technology, the pitch period is used as an important element in various speech applications such as speech recognition, speaker identification, speech analysis, and speech synthesis. A variety of pitch period detection algorithms for the time and frequency domains have been suggested. The AMDF (average magnitude difference function), one of the time-domain pitch detection algorithms, uses the distance between two valley points as the estimated pitch period. However, it has the problem that selecting the valley points for pitch period detection makes the algorithm complex. Therefore, in this paper we propose a modified AMDF (M-AMDF) algorithm which identifies the global minimum valley points as the pitch period of the speech signal by using a rotation transform of the AMDF. In addition, a threshold is set on the beginning portion of the speech so that it can be used as the selection criterion for the pitch period. The proposed algorithm is compared with conventional ones by means of simulation and shows better properties.

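A plain AMDF pitch detector is easy to sketch; the valley selection below uses a simple linear tilt correction as a stand-in for the paper's rotation transform and threshold, which are not reproduced here, so treat it as an assumption-laden illustration.

    import numpy as np

    def amdf_pitch(frame, fs, f_lo=60, f_hi=400):
        # AMDF: mean absolute difference between the frame and its delayed copy;
        # the lag of the deepest (tilt-corrected) valley gives the pitch period.
        n = len(frame)
        lags = np.arange(int(fs / f_hi), int(fs / f_lo))
        amdf = np.array([np.mean(np.abs(frame[l:] - frame[:n - l])) for l in lags])
        tilt = np.linspace(amdf[0], amdf[-1], len(amdf))   # crude "rotation" of the curve
        return fs / lags[np.argmin(amdf - tilt)]

    # Toy usage: a 40 ms frame of a 200 Hz tone at 16 kHz should give about 200 Hz.
    fs = 16000
    t = np.arange(int(0.04 * fs)) / fs
    print(amdf_pitch(np.sin(2 * np.pi * 200 * t), fs))
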